[Binary artifact: POSIX tar (ustar) archive, not recoverable as text. Header metadata:]

var/home/core/zuul-output/                        (directory, mode 0755, owner core:core)
var/home/core/zuul-output/logs/                   (directory, mode 0755, owner core:core)
var/home/core/zuul-output/logs/kubelet.log.gz     (file, mode 0644, owner core:core; gzip-compressed kubelet log)

[The remainder of the file is the gzip-compressed payload of kubelet.log; the compressed bytes carry no readable content.]
tM2PmZ^u}ccKCJL!D#C@-Ā Qh:vB:+>qE~еC[9,&TÁG *M<; voSVڝrMHl͙mFUV [R)Fx8J'Яs8|s0>՗Y$r*}]_?f>RӗAx(ۀ {/̯hB2/qUl~27Þa1_-v/r嶜}/ܟι ǁu&s>_ER1!Ԉc ޲v{Q*zk"KqEY:RaUUaՒUpT/[XTGT9pɤ\}R1Iqn(hEuVQ4 JKt{|UeR&"PB\*.LByfxǍɉ 1X([u?qC:&hȱ^PK4g&QM%+鵋5g a> x۲Ta؁XGuyg7zw Z/P,ikDJDoDIL:z H+G&WT8EdQ)z{ {/v{,޲as.ܡ~Yos$pv0f"藫6 ї9đmTBKs郕Z) *pt;tBDN 1Q!TIt@js{$|A'9L l;jɿ:4sN|r-b-,|zGziJEǥykpо uS+ɯߟ~rk@ ȖҖl:z@޵#"]`:fŗafno`w ttQ$$'_X[-'mfIv=~U,V)fuƼyș8qσ2u›(k.Nzන\(|~0î]ոjV*rBy AԄȳP XQgγTI++9}8iǤr6uO, Qq3=>Jj,υB u$Qw9(|Hf uao>}CCjοnwwO9tfzWZ6rnw-=5ZP moyuax7Chsu665NgzAi.lz-?o;33o*wu ZnObOxm m~=Z'#QoяnVo2afk/BK=Ͻ0{RH.i2OY;|b?}11f51d5`@3嚒0 7/Z8~&Z__FW7_.rp/z29ן{s wbe<=vt=Y KOz #_>L?qoxO^CLrk{@GkijB=W P̓`\äA4|ҀI:7g=8fa%хA3]]ĻАHm/MP.$'7%=۫@-˅mpRK"*(3,\Q[\eVq<9#T`ƹlUg_ϠwǑNä?-v~?/pXm=&vlZ=|U{8GYet|s|s;db?OSa1ݖZb=xHǗ-8c̗c ˺-|uŒ133t0O|_1m!CW STA=Jֆ} YI֐q.,":)wNRnIEn8]<67.kZTcaDO 2}%/"=s2Y_8pF+/x4ԅJkʺ :Sznyns9P5vNǼ'|_@B؟Ԅ^pR42rB'n.ɀZԩۜ&lDJκB!5f#ln{%}yڸY=Wfyч>1[r=Z6:#LB9euwe4LK/&?B}|hHd#Ť\PSÙ3J*%i4Ɠ`$1Y^{Sy6[ TD*2&> =AHCNK 8g- ol@3^TL\ZM-g%= ؕ4="+.rװZ&&'%QDh%!Pd`AhĝIk-5foKV ؞ӓD&pAgmfcˈ.+&! hTc%("ܫ4LgZřg(tF { tOIlh37B9ޑ9|QsC 2$",A6:&F3X *- < LJz #eͪJ%cd#f>rpBhd.k|ؽǥ<,< oAWX 6-npZ\4 r3_ mRHϛj׺܎gҳ*P}IHÿGM}w=7͹08X$4.' @xhjBsI{.}kemeǿohU?s]zX\rf6I"S^ۼ J"cҟ/ig?+YI>(" Q\O]|]Ӌ!vD\b z0%kou S(93h%k mIMKDr̽vߎ+޺4}۷o6kb!Ɏ,X.1ԁiQC{i8kԻe"ϑ { 6&] % 0 `@0++L-n0Գd4$KJ.` `G9og f1 [LNdDRX1-DvƊ$=`9uPdF3R(I U$!x˘Z DNiHՆszNH˞K?ȷkV NVUn -JQ} "8ȅ\#UE\2;St!02OWYK:έƹ;rr0#b^p 29%"*$+6b2' m%2 nC#n֜Fb!'0Xj\iIXn֖  iR~I0gEe>6v8OB-I*-Ab&Dm0QG\{]"@"1$5He9hl5Y9ѿ:9_;+YwY(IiQXZs4dum.$ԪlI6?2p APpomR,/C0gcnj>cZkAl6Fb;QEpfY5J6˶Cu|DY\21'0hчɜo*zDhl%̢==F"Oe>QVQ#D+A)Y HTPwz>dI&9_ŜRcV"qE- y*w`eo z=h֦&Md+@iD ;Y$t&/l4X,ґPw7y; FޟOe/}*=bd{o.7u;Nzݶ+H6&]U %z/$QdBkc=cNy+?XB2.*/W8!/$uFKJO?6HqFAXmfJrN) jMQ< 9 В㙔JY:BZYN`I}xZZ4ֲEhq:^d\#n7nfm֍4`)!dz׆LEͲrL-Y"hIxHP}Hyf#YR:eBz#,ⱁd,|WD`?"sVg+Ǐ(NY'rRmv0c!! 
`U< 謏PFQlR+9KQ\PV/Qv?ЅSsʮJ6 f[,#H';mWjhSϱn[>.)D--0̬`HPN]Xc-K吒$Csu#х.RdYJ6#Y!Yp{M\+D]vpQ= }F1鋳O z)iGii$?ܼ"$y4 gY߁ࣃ/1 :{R1mG>?> *\rD9#om0^6(YҬ'gISg?=&Gp*# 23T ɀ"i"cD|9V7#0F4>&EV WxI(H{wv 5;,6Cn?ukfa0)(F#eS>:1Ķyt9crp 7dªdi:pI30L$f&'Zs.*VHqWǓhX]{M[*n)Zr]Y/dY""Q' 5(9Xsv`0,sk:poBb!HL}1>O[<}?mvâC61"϶'V7 3l5E\JdnpÌ7`S༷-b:\&h.0dتQ_,؈Dkxcf $)DjȵH yasJ;lw*sj\pXR7HB-/Ū@SdF_19 $7 )AP^F53\m8KkUU7= YD>+;>lrڒ`_vG@awݵ=]\&0S K~%߾{RyU%2:ǘgF:(5N'c8NQ@$jKj/+CɏʏՈfmI8z獙Nbsş7h`0fY #ijp.:x`Li$8$ YiQ#υ|`RIYqp0pmqwqZiWaޕ63翢 m(^$$AHv`H^I,[eZmxڃudŪW`^yEVD]Wn,HG$' 'Bf[/d4JfeF4O14 Y|סVU#u"5-KRত BlDU]M '>;P"rB>K@`]\&3q8%S)N#,h%U$&\k8Gl)fev2o.w703"F- :rm^&e&UKti)*9BEJP@`h@u_ɚI(g#b4Xڅj ء!H)) ̳PQ)pQjemvB V#^)Q;)6J4+څPWq+鐘WbaI( SL1LYaM*rѩ t^|6_̾CccBhHpĘ ºr:XjÔaH4#SO}Pu.U܂{A;4@BeCVNXA,B(iB( Zd V\M U^KћcBj9}ϼUwr6tG.&b {aVG(͉+If\y5:QTD MX#Ӎ17E0q>; i}Տ6?ދ{z]TBQh97mNLW t?C 8% ɦp 7V=dϰQ`.J:YlYl;륖d!Ҟ)/(tζnWFs<{$[=ӳdmςj@K-||R+> rfpڃͭ/@hץ;a 뎟 n_Z79tft4OZ6rn=z~h|"%<獖o3s>揪=xYt˷:[ޓ^`M鯚Ơcby[n]ZٴwzDQԷ'_ܼ/lLnDP[0sSBc6c2WzK'Y/)Z!͠aԌ&E 5E뷖;\6Fr]N7N9leԧJRB!5w/~[UV2GH`hHT)'ݡ<_ :;#ґkƟGS0=mQr$; HIiF']v^ F@[ ULc3ܧą`=8h][R%PΐrCA*zQqi'm2h/9;b$3V![WFdMx*Ylhp};=["½N2d|Oάǐ?sGi2uN^j8*H䨽%.[^YΥ'8VF̓TI΢`r KA$ M;Tgzm7d:Y!!QayƙVL˸۽':ǡ,7< .!ԯ>cq'>8mV8x/Yf6'σۤH|Rz֌RRt?קּb0V8qǡ/?F6}TmJ8|E(>ǮI#& ?n'߫L+qpݣ՚{Ļ(.:ˆqq%u&%销4S^ЇҨ٭韯A/~γYuT6D4$/W.;C$f;D\ Џp;+h0_ U_~~v= 6ެH$g+n7q-6ZJ.Åg}I}H#,[%$;rp`c }bN2v !~9 ,I"'x׋?|KmBѷUmvundR|4+~D%/wi™m-;m^/x?ohqhڃE\gK$[Nӟw3O? EtqSh<;mL~V? k_?{ʞR$w]7㯋O\f=\ą5&1,!'3FZ`ϥ\)RzAA)٣6Vhis4IpK4,f 쭒7A$&u"rH=ɤP wuA4QtnQ$er,:Ɯ Đm 665ps2FX_FaooX/0!#('9D4Zy!MZ(F1eNy"Pgj'=]Z1DR5ςBNKtG%.,&dA)˹Q#aP"+# | HzU jz;im~eK!%AE+й-Y^aI/0m9gKUgŵ H *6˪;,5h#eAs\0C0G 9yF*zDhl%̢ U#2|먑#xiU HkDd8z%IJL 2wI%= %0zk0 {+jQs*sNлt@]4muwv HMNlb_‘E! 6*b]ǍU(\}]!2E&q T\d_A%w !>1O: 95<vJ?BA4l R2e'.BL9sYF2JmdO؅A!..0^_?IѫQl(Ik7_FEH7EnM/_3] l3,)GoÛ7MZ/!ڄ@*|뀅T\z 풷i0\oזɀYlw4!ްM(p*eY~hh dc2$574Ą 1]ԪI%=U$õ#'x][B#Bn]Ô 6TBnl%tD+d]N*BEƋ")oy.!/  քk]N^/2}֊tLn+Zv! H. 
0e& h B,0IYnPO88gmDe9h KL{@cv xr&%X7 <0"ĸYQkڡ畖a|uŻ9B![>y':.]\]QO-\|gK\Lt׼jNy%}Ao+b3 'O$KjnnJ6w6=TS%BԿQM])TC_Ėޅf4kˇ%HbGߛ q|e$./ ma@e(Ze J:sQJǂHmd:0\f싵Mz޵wDo)]TZiC}'a<MA MN[Bhh%sՍZfEu|Iт7dA3͖7a«/Zisk/*Ke;|3;sM8t__:xa7 u<-?<6WcO YGjm:)wJRnIb[-d5翷yVtr1oY Q#a~|^q?,ue߰#tAfbG{mravqgr{ͽR&32{qv[tlک V2C_MG'SmY}T,ޑQua˪v_ם+[jioCj/dCLE0Lov?#:?qGt~zBY{ A{AO^DŽ{Q4F'1BQii@[ UL %LgK "Z8bRz)U1 7O l/"6}bre/b.#ZY VۻE>-Q孧^(aɆk:t]߂4R䤥>]P 3KFMZ ȥxcT~(:Є)^0M Gҵb$3V&A.Zù!H% MB0VNbpǖJ]R-be]ȟ!sQ 2pg{ ᅓ$t䨝 ̖τkNt*_JwRG.D' 2Ih=Fd0`)D\D cYbYBex/\ĶN2{3rR\$ s3_#Ͻ٤?wkXv|7^5{?]2}}bQ8q>FO/F3^IXM(d j|9g<65ipڕoU&2<eh`gfae1%y>dcDһFEMF.?BO{ξƳuX.D4OoxI}#g8p4z{v߷ )L*/b_^K_-d~HeoRz$Ǹd툽=7ZJ.[7 ?{;1w6 m5KHv`cɻP{e36=:\+SC[鲭ι~܌_\$6r- E\{T-}|odζRt8)C9oχS u-;Xmx]O?ohq iڽ_ɖ5BQ})i4q pS_>oO^V(h$M*?{Ν\k$dhͩoaf=\ą5&1,!#-]0RXjEtxA)١VhiSTNA08b-(!FRZ$By o[hNg}b}>)j4tx8}k.8UWw0  ֊ 01 ƨhtF} yq9l3AL,8 ƥ7,f  Və B:;Kerks2)/<Ǻpx4QmIʄ>Y ĐI7 <8m$65\|d}{E+?@b=*7ÖCY}@͝28AHV֋:{#Uv2 H=^gX8nɅspg}|Qz)t@5ς{!x2bC)T9)]h+A -q4d9q a$ A&u"+ mJ'|Vm8;Ya-ZT[zAALcD^2`DBIY` Wed9hc,psJAN2J]Y< JYε 1B1COp V5NO5^q'I[Ҽ,aÜ-J'|"ZxP\c@XNpT'uYV&˲`rL_"|0kr\E"LxI!ͣ9 9ǽ.o:nh76zr*WDe>Y^#;x+%ӪH{DdwK& +}5J:Is^(cNczc˚'P;[4^O3%_I6Y PZ QN 8Hf_8%s0ר 'L`# 1S6 YLW3H5#Q%Fs3ΗY'z1;A`'#W u^Hi8jG0`)jiՒP.W{zcoelUH+sY*Œl~<+-K'A}S)'6kb;|OZ thШ` Tu4ؔElS0n4nH /Շ 4 n|InX^;`0ݧ"^Ql?5G/9ts;XICIVY1^ CPV.JR@CJe1PA=GUCF&s털Ն4ZB'tYoٍS:y~RԍWu"A:􉗓%ook@.O:4NA(CȀΑlp# Pv QkiRd{Lf(Xυ+-"Kʼn^dAc&ECUP I恄>%-lKIRd>~AFӽᝋޮ|t &pC >3Qvod[{=tqu\^q3?sa$ɱp-`޵+鿢]`:f `b>y8 Xd1INbbc]lSbĖfwWXF<< O"Xǥ: "өވb "kujyR3) $[Q qZbZӦ躅cXTvAii ȚDKbC೰I07͆s`ҵ)]M/ܼd%9r_$-܆"? 
MYЪs=0x>A~7 '( }uGGNmsFIQ$bCzC ʨ-C-+JX+6F֬/-?WfFQ'n(;_4`,Z:EBLQ$5)hQ -NT'QNZ k m)42nTFyyl8=Vuʝy5NoNj'o399Dd> lQEow ھ6/g+~,aM~!%mu;0R6)0'B%bϵp2K 榚zWr.›rQ YP[\.%#Rtt-XTkl85n*dӎ闚a0"pW Iz޳xLnٖy6A,uV84]:.BCf5}}GzxXEGA& f{KR&+|Fֆ3S2dTZ?NZ#BS.mv'QBpF!f]P.YAL0#(FfoXMHhWj8,gur|n6)5o8k!K`rd x`XքGh*j\GݖBARH;4Ee3DV`ݴDmd!$9H)L6_t+W데D4ca%zBJ ew%F8XL1:Q>W`tI0vJҊBU LޤP7u)C%jtMmF'cQn alJq 5wmUg|%w3uiRg4*fuBZ6r K9 K'8Kp Q W XaK&%6N-b1g t$mfQy$ BQHNZ5C˧rҩxAplGzEP^~ٷG y#E3L%4^'Gv-x8aGrdjtҨT%kS\(3#2р&(`Xm6nL'M K[RN&ECZ"a`rg@gIPB"l%]7R1{-)y{"*Ͼ1^()RS*M t0R E?lڤi`[CLySvھ%Eٜ$^TIyzZ\؈]uPd Ml~=ա)Ӥ_ErE@.ʨccsOu6C5Glz狣)Ȥ<JN{(0 J6QAPBR(40-?#9(!X9k{Sͱdiu5¯Ҙwޞ,1뷣\ ̏?6nɬzьƥ]OmREɯuJnt;_֞OZNMIw?;w$ lt[v뤑YC>CԠַv {՛>z@Zcݬ{v{q^uFדҫޗs.FS&Sb=aL_jfW1ѿ~=Y;Ɨ)Z1ξV+N>EjSCeBiV>硇t_HgoV{dlU'@ J|8wV)PC+o><}@-~{+(j%&0Iš@âqI8OKn!H/E'h2˘F*gUb?U-oO"N)gUhV&{kVh}Er؉5l}/0]s|Woy m9i+Y?ftVl\ W62oozyז ^'cf+ʍ  _3u:3 ltҁz vp!`&x\N4ʽ E$"-8kKYX)U>(2(B[:%fTݷjն$A 1lѨu XAj]pH:6=/@ްʟa4 O[RIq `!ZRrE*k>:ƜM9rtf PF-Έ4 reo E*MU`4K6QuH2&dfZͨV;i^0Gs *SE/1-/ܰTT$sј:S6rf 8Ųf2F˶SM|4ŪTY\H :9 S!1sB7GV,h}2So5r@5),FG+$~dQ h5;*$h\yn{q^ϦI7V5@JO$j* a6y`#4kţb 6*ײajY.IН/Ad+M@N':4Ru_Ғr~=#׎T#lZπvq􏭂i.6uz[:<-YUڙ))r4Z/lVY31Kln FTɏ.oݡBCdh=tgu=νG/4&9wF>a4{vRu{Zo/-īYW[J-SKUb;'Z֏ƨ-ìǻ;׍Z\ʞeǶ~zDGH뺨u,}<-6Cg}A.$jfkW8թz;Vj?`eiĢBV-yj,!ʮHYb-)y [ow;YS=C+bT m il6T.0P6ۧh^et_igjft]A2}3]Jvw]{gx/'1￶wwϏs2xO[vrU&nw=?|xUK%z@-wrG {>DB:̗в=H_) wc}a-_5=tnWioۼdy?mwZ{m_65 m~m3#Gn ,u' c'-6uwVhP ;O/m:5]l0KӋnZ~墦<[پjUEl2E{O8W/o0l̀ 8]M9kGEdRԢ2@5a()iM`4dsζ{z[6c>VP-[zqSo[gW.2@{-:{e` xaP,>1݉iH:{vToPR[ ?l4۶qFo _| Z=sBQcy{WdYΎ6m86wgZ8Bc] KeEOgTHJC25)yb&gduwU]u54"~l<2IMH^>\tii<+noc ͪ%M`:=b;h`f5ALWOW[ǫ+ߪ746]RL ʥ*΅hu9uMc[k@zT 蓃h&Ʋ$U ^ BLȱ4D4)0aE@RiF) {n}hwmt%R %봕z(%91%crLyf@ Se9qg\OhoF~ݞxsI`ߧ) }ЉL#bTBs"WєV<+ڀ92i8-X%% WMI@MTą`QU !cIJL*&V*Q[emv]h=O@> $gFDTnsCh260$]oh!)6Ǘ8.Ts c7kCm| \\'/LĶȌ0:;Gr8oqUv:;'B$Qu`> VYgjFo51ԝS(";}~]x6 ܁qj&%3Gk3WoV/Th< 4 @3`Xd\ LZrѨgAˣdL#k?r˝`:gY1c"." 
K7+AgC@ ´H;'<0XfgQZzkc dE5ίUW5rܮ(s 6Ӗ(O\ؖ.*͋)&ԑq8;{,nͱCG'BG մϤr) !o#@A w^T,^ DD |jH$ ^Z 57^ω0+( N%IQ!!WBcB"ȅ"ˈb Y`,pϭ1&dὦVdLOdFnH^@C:X? _? Z vDǔ}p2;P\ eIʡ0ΫеVJ2;s# z'?HRRGŽtfV20 ,kY* ICW4 T)62=O'swd7BNbCG1O1%}t=m1M;'㗛A 3Ix۪dUF.lY]ߪta\;L{HTfr+)vrސ \Y"% $HTjarr\k!0(LeIm[ ǥtޔlozbx]Y',r[6Mkgԅ}חaNv3mPh2IY3s3(dPcPr]S\OsĴ14=Zn>Ox>]alx{ x6-zNNYR"Cf|)ǕNGp[,F39r K2F^ֺng]aTF¤ֻ B &10X@T^)Ќ )L`kA`VyH( yyF cZ3>x9E4]L˾cWuuV5>9@p:7-/lAiF`+wv7)3^JXCZ<-sH8ZQM jgӝKȐY]ާb ݧ`T~,Pޔ+'R1׽O$z>igkpi_h׺@}>:)C|G71X(mPKlP;,mmJ;Hu"HZA{ffOJK?k_LK*ř*Y架lMDt\XΘT.t%xim ]\C*:ey]@ ("]fl] (A`#SȕNtH/uNءT^:mExB 4IoȱY~dN|폫"w^^l9WPTs0?3f!!- BժX"znITgEմyniX]٘ 1|-`_߾{G#%-pj>UϠzJҧWƭz̀)㾞@ʟGF?`EeHҌ?z7 F3D!< | )rI zmu?˯fdFC-JߢWɇk |C\=FsK߿vyʶHb\Fǫ!W2K*u\jHqT.i x QLEا b#{,Y{.]it8;D{дToL $fWs{_/,Ds& Sevo퇿2jV >/zKw" 3``yΙKaGgpQĒbFe#9hܐc@ *e `a_ m4ZLL>[Y?/HǼo_]6>-«x>^ͰZyLv鱽̟-7bf*ϥIs`=[3Τn 3 N)Q(Lg%Ahf- . :H0fϼ}E'BtiSrFm kGVWFJpuɥ"@WwHw{Ƥ&x9$H{ehwg O(spW.܇.e#Ƙ(+?){lQ<;MTm~6\[;' (ڛn+.@5v΍lyFsD*\sAo0-<a\,D kωebJe ,mmr$'@W(7H޵dE?ȮS2,0.6=z&&썔+Y~5S aIW=﫪Ue*T-҆iN9nJKo!dRãBcn>2*ּk94gk3 ^Da#.FL~b?[ VN[䐬C'dɐ"8hJ:dTD>q.k9kqTer,1ڜ1<'*JITESf-\ 30QGİ=:dm4XaSFVCIq0Z5Ti#iSq%gE<%յ.fZ=e [A*oŦ\ "7F\r^$l, @ @ב%AMHTGo<8W*KIP2L$Et Ѹ hԜh42eOcK-OnEE DG9+Cن5`;b8%al_4\#Ȗ*}wE]%IOj.6KK` OA:Y5 $W'S@y"8(=JH^3oweXs6 |[4qȀgzLx GYt%E@qR|#RT֡:VmWD"Dd߇ 3w3l?2_|Uvލ^s=*hʖG;Zx2AsmZuI T#xHu@Ky@8mPzr,<+ :9@mi>*$\U>8xbgQܼƁ|ƂB'P$RDM+WfDS'kp jg7|Oy X@F@VOQ)t2< om U!c\W$ 8UQૈ;# ںM5xYN";>o Əfr*󨓹hO|,Ʉ)V%XH;Kr13EWnBW,xҗ 7<#dK^H7S@ʀv0!%u&bI(c@A@/S|6ٕ;gfMD0xbdUӒkIpKQQ>&U%-@ DEl\j%ϐvY geLkJRY PR]ndx魈XSmX@K]g[$,B H*/Vm+* 0dJ-K6ѦycFm)i痫ϹI4ܾT2qf*ZB6SQ\ۘLGkGAS]aYZkd<JV ]"]DfiM]aFhQBp 5ȡL!Ts \A7"h%\ UFAv2*m$ ykV=&ÆY|N7؊"QD,R)Pi*᪝%r J~I^2 apaywdZ1T:Um 3_ETi,W쑪0I^M[+~T lU*#rHW 8VתT*|$+TGQKLJ%L欬Oւ?!]͏U:wʷҸUaLf1H 0xڀi#b.t RtiYjaP/ZCׅĴ~)Q2r|d8NSbTT9 LR1ORK[ɻW .:㽱]}6BJMOӥ>0):&f$7f $!8@T+9"Z`4rˠ2ˏp]yuEC@tNViHj7{qZ7 %&Hy SyR1GF(LZȥ9ˊBV*\KyL{Y;&_Er/8;56^Mo#k6J^ho#[m{2i[zsj#”e,M4J*hhw峾d='+;6ɢ翖-Z=ͧf;J2i7MIɥ k5V$gjhMUi+_U `b=#vfbw 'ir*~ly^`䷧̜_6ӊKǝyqt={ kgp唊 _:jb1-2s=s_{gƒ"x 
;DL(j[z bQiW]EL2**%#3jO}Q<Ԥ'R6_YoxoU $ hTVd^q´'Dʢf+/N$޹[cHF 9a[%(!# w͖LF|9PQo!n Kc #=;[3 O;&Y/EP&,_,j͢uhJ:%o]/WTYǮPI ŽS-_q)n` o"}e|:#oS426os_|ǡ[FR[L[rMx٣MW+zkiluoڜۺY\~4?VdſA=iZzqk?(|Z"N֧CHvD_p(bWћ֪<YC~sIx!a,+tW p=AJȓ+q򖗐I哸^◓eIk{+jԊ{*LXyqY8_7P~@v牺l5$XF[-#8svٳxsf_?];ߝOwo -%kUsM~|mLOw '\}vˁ/pۻ .'| 0c=9.9pÓnWN߷>;+[|g:$4s>C)G GGsEsGs mhIYIzp@ہGJH)fߚ]=nvIU,=}ם{̵>|gQHBS/w(ԒxIY?|HͩLxg>w}^ͧg>_ʈF$C>\M1}Qq?P_,Q޸ iW#SzRyzB\HDhʑooxة[qtVKthmΈ<"[.j>/GzJѺ;CJxAJV2g[KUn:۴!7J]Ϋq͜#i;͛1e:'>)_thVM]2V-eY*)c6"FJĒqFj]ă^"O >9GS.JX#Q(xFIQUUFj%ŘZGx =[˫_^JttF[?M[[h' WUAѐCvv!;;dg쐝Cvv!;;dg쐝Cvv!;;dg쐝Cvv!;;dg쐝Cvv!;;dg쐝Cvv!; `KdgaGvv"0FvN/; + oQv`Ew"]_^?ZnGiMҺe޾Qk芉Rgaٸ h+5jДt)|zfc;d) Gl?z|ni5u7}חվGn2@_<|q.zajgYFnVST:*KELFFyYPje|{NS4)w-z#|ﻺ)/EJkZl>n^1}X4y}߹G}6߯ޯ.?fZhxm7{W;RNs^~L5nIw>÷__5as]7ǻWm߽{xtϠ{Uh}{6lz7sVya2oqu7U3m_Գ|с=lh"VA-/n=yXy|Sh8@ӕ˖Xr!ʖɍ U3.E-=U=e=}zDq3m'gƎ_Bqmw؃0V2:U Y:ҡ5 Y:Y:Y:Y:Y:Y:Y:Y:Y:Y:Y:Y:Y:Y:Y:Y:Y:Y:Y:Y:Y:Y:Y:t$G S3)kkJʡpʽO0}Klidz}Ɵa-ږ`{㭧SPpFk1]+̊-CAHQR)<*Ⱥw>l)on 8,p4)mU-c?Ov O vEl7jDDԱ$*~ta6U;A:嬤^DIt ^=F *YJ\™d%5>LNj͘ ;S]JYKg}z<fy[&ؓŧh=~zg,F_Q,4"i54"j43IQ~ .ո8_nжMO~Y!%;8Ӎ6IYŅrcD/v=NOge޿&T҂ɽmk颅Ivo06Mck6|Lo [0mSr̼qqq KI 1>罥B$2Lye,wVz=S5:rqa1QoDYֻ,扈#D hw*3Є2L zW) -#Vy@) IfߒX! zAkML]YK*n&m"hr'e_u-)j%K(AcwvFICRcck͜ %DiPS6^ bk} Fǒ6ྲ-ރĽ&sm~n~~[B{Etx=Azr0?LY - &=Z+#F[)B3A*,3'T^ʇ6wTn QsE deD(-#2&Xt笉HQ&c)!>rgS\( ea2|<6e/ `n0"sb:eid4b_9ZHH2`g BmjNMfiҁ£a 2zC͟_;N5e\B9^%F: ;]+v;a랦m%#j=,h ^A}lߜ˿Q+*/)/u- &񋔔a9,WS+pߤY5ZjJ% 7YXsS010!4bc i=\x腇^{v/;<[sŸV7[t *cWǰk8 &"%UAP[@,sUZUVK7WEJ݋ߢ2VHC\J1WE\}5 D_*R2 + Dկxw.n)7mbxӊ:''v˜QFCo2o0;aG;\29Q/xnz .~5fkٵisfHI L_~OA.n20ܰ0.[9zb5˓2+eM9a(ZЛ~Sw`.-tU,ℑ~?TKaV{go itwwO9tfwtxvf!l墈xs=-xG;s=2?BH=8yc)ű'_4ebm^r`җʮmSVO-za&txn-Tmm=7%:Xu|iU ʁ S:7Ѐd^{ߨ.Sgu*&L@a3_]$N_[5͇KPXd! 
F!PЇvT!~bgKDZ %7ReY }8!^rV tdܓw ӶtX-2}KrB|Mln7,x}Ab.K޻{cx%/W.-0+כ^oRoNt}ޔWu\liJ8,~8 P8*mU-j?ܺ}D%5g{| lw4W^1_O1<^C++V/^'~#ˏ-"}Uc)KA=iI .dն+:=o.œ[srk!NjYoq:ZHKw1RsG3:=)Ia|nq!~,I=YJfaR ҩOpbk|tChqA,޷7GgJQ~׺yz-4BR"{#%wJZeoD|S -s7oqV3lGLd GtP&2cqp-gA=Mue.J;,'~|Dj[osRѶho"{ySgiZ|_?fwAUD ^yy:w%[\}"Czy-K"[N;E~qSh)p)Xoz_myYǣ}0oW2aOSֳێhN}he&#8GdYB%!<1R&y%xJͽ[@;M==G̉YϤ$޹9 e*6B߶.d#A<>\%|Vj2q$*Hsbg  6=R",؟<7q=N*ϔ*q2Q42g3 /9FAIsvzS}z D0,e!PBo"G&Du< TpuOLƲA[]@cd^F_5g;:ȷWψ?~5\.[eQ|Rd.U=MSRً&Ex({ͭ;Rw~مe!%)BА9Q$4cԘS.Fg)ָpc@`e)e' Ec:F-b-(wU΁zv~ yZrɶ.D%0eELMܣ!)Feƀ2>;#K*PS>gP`_Y  }INo6``Q) Dm0+f%+/_GdZՠV {?vuo^-ΡyY†9['@ n+sW]t.%RzQp~5oe:rp3$Z5Dₒ 3j ѐB(}T{6鵭bDVXa%{(_59VDe[ WBLiUDTĤ9 r+dpP)x!e^=-KXEZrv! tW&MdwkDa- =7qBS&Bl:"k{X(9UArU*0S<42722\틒q} ܛCe7''*XǔKo9f^4YB2D;@\ Is%Hn3JmDNON o=B &0d.c SkM7|ڕ3 \!]x;iUZKJ`ttsH "Xt:CtWB^"K r(p3ڐGF?"<& ,RhLu)TJu)w8>0]1=^}%jM;Z6h/˗ P5Hu$jB#@S" 8cN{z" UwL_ AU8 *EZpGKC %MfN0IXx:ǰUE&B |cbY`q2k)U,Bg AfhD]צpCU# }NM]Վڕ6'KL| 8snLp| fdӫC?3/2wyx35~V$Vbcyct1$а~~yʉi8#)#R*ʒ)H*d,0-w!hTLNguI\P6Jnʲ!"'$xI"9AȬ+j90k$N_ |>!ܻ PȐfQ71釻6\nG_u%<<0 cN+*$EHVdASd2:o eV@`Ʌy[tٚf6c8WQ OJ#Mۮ6gx26:W_.(wgJݐ._t5>u6qt=[꾟Vp(%#獧dZlc1ދޡ?&O$>.˓jSOK ݞnN^&o="92 S@~E!Rxf)9A>j]<&79Ozq2~?90^y{E^ ?'FR|d2Y*:n` ^gFV=ҏK}+ФET YA)"@zcPZ :a6Vv~ 3W%{3%'e]:\MN<,*'XznOʍfՖ,Z}}]$6MֳtqqxwO%7.D{sg̙[aAԆI%%Xo@qaTL%Z3$6| ]@,B] &C`匬6eLBC6)i,A dj[jowW$X[X9-<-\R KbE:d)md˭-暁ڂ)I!FcHb,0h Yc5~_wv$:AN 5,4u*PL4@Bx&P)RUpIsiHE1иj^}`˨Bdj!hKS$MJf]()#)Qh && \I$je Ak|H'$sZ"Xj{MxCB=eĎc$G6,)$D b9$<rAVrcj_QIS$Ej%*fJ4|3c;Db!RF(c{$c{ Y{JWuYfm=c8VIv֯g^Z>|T bB:cj!!WR5xt>=݃v.#+_:FtJIB,F/]>N" =ݤhDoGZȑ5*<hD)8ǍXb#&Q ;Ђ!Gx'@5 >T5&m0dGh6Qs :T~v|*geomTGi#Ar~nz5ۑk#?يZJnU؎n 8Ah(S#e+D`Y!gn'1N[ ME_"4" >=6Ux{7P40EqWKͯ_5OU靟f 5[I+h0n?P~cHBUr2̳N+zx5z9~F\USOң{gw9n\;sLC|-V^ G&  {cyp=8h :gISZBҹ;?)$\'$/kPLe=[6 }ށc{\1KDNb%xԞSN{=5Dվ{jHn׫sO eTTr3+E%` Z \!B}Le'XwgW)nK>  Dv7ʧ+?4#a/g_+jsomu ޑhhg<ϫj{||=K׶#Uq0I).֊2F͵WCCڏo~EI|{g9ٖs:  H&fY?f w8yo3 J4OdꕜVY:F8.sIf,k1!Y@J?Su՟<TFT?LgĹ@p2ioW7/ڵeu_TGIǩ@/f"ϫ0B7*kUn6CFɇX2a0Umg}㍱MS=X&:^ΞbZ&Trs4Ş)Jo1jb?2nW?* 
G]g6sY:*k] %ߘ[0D%R ·kVf蹋j-6>*Y_lВz:jI0'ez74^J‹z2΂ lǷ[;cwvSڧT~hV,bsr&1hH \x^[K9#A䅴E.:Sg}՝`[EZ6UqWz".\΋z{g 笉0Xm^aoƹho. >!Z>䓳S~2@f6Cߔ:+EgzaLw$8yi |ӹ#ΗG|_V 6g۹}Ѡ1Ԯno[ڻS?7pķYr;M|x\?^;^߶wn GmyE srJBaxW%fX4:i4*C(ÞQ$ _lp_!\Rl~u^QHh&8ta\[#ӌw]ݳ.="j35k5dH7 &MbMnKOyGIl6r!'[Hõb(7{ؒo \Rf&XNf&LbR+hf*)=&<5DŽNRU&sz*S29•FuNv p~x:ezZMz9NlGύv&*"%?'TYрřF+99r5ָ͚6zfpT~~<-L?ZD%eF0m0 "s5\&ͲWc70:{7@FhOqs CTA,(%i0gM5stNR{TH)'GnCc7-3c)2%柿ͫ6eYY#qeK9_eZ(v@F!,:0{8FaV}7 3vh>Q(QۤTE/~?82σ3o!TB#{1'(W'yiq5 X(_v8 Wז^wʼ7nAL& ߃ H7)E1PƂVg"j 9H_7(r|oxp־ŬD-jAeզ61iPSmDtVh+82V PfA*yPSQ0ko:2j )Emٳ5zb&.;uyi)݂0| `ni_ $ܣi[3B-ֵ,̡+nri;F3>R *&!`" %A qyRT#FEŒRpAI`M<7RhTPjG/zjUï: A8!xb=l%#1( A-F,Z\ާb꧓O*|3on+ Ǭr[Ӱc,Uc9Y[ͫ.)v1Zj5{3)څG(aS*ڕɥP]Z.=ڕzѮ:8mvTNo|w/C?t1c_>D}@{eym`n-i)\ՠěh{ez3gfW!CB%:F G9<!x`DB:Qu JPZxh {gAQn @ьKn d{і"gƋQUhu퇗[ $ns'm5)-=2BI7;/ePAߑ yrx2[ 3^SQT~GDHbrM%cCId@b `RE{E*dp604gTĨ]9 O!.0&G+I P"hfNc6 xmJw,EjĆ t}Abܱ)j¨-}`xmQ9g(EFa.=Ii#kEEw(&`$!eHzўGX"EdQxAHiE:j a1rVa?ѨS}Ab)"ˆxD8~ @8Jis܀#a,eވG=zLh4>8+XՈxk0 >:qɦh >6޺܁P̓O_'S+Kvӽ;^#똍Dȹα(03F~? ϖ\۔m%kE4P|M)yº2 ΂tߎ1?G6eyJS ЍRhH -ϘWT:&z,(Y!aH‡Ċ:$5 i˵a1DRL1BID`SY56Ws}9u`\ezb&gJzGpZKvgW{?ќ[bƼW!*QBPLSUA#M ٛFRUpM7# * .Z F )gm4l4ۅDJk!!ѐ0u@1:ID9O=Χejk8!})tHS:q6&ʼBۂƴޟ\((^+J>}Ӷ}Pի˶OĮoMPG$ {gzEuu)8hGsXֵWO.":4gK,z6lD >-+. 
)%y=ǚ^]kt`lӡVP Ih"߉g-N1W_W{~eGoa_r&7BN"뻔ࣃ1xCL*M*9}^ lpuY ") >Is_ήgvdw{ηx׭Nj,\M>OtKr /)e卶>̀bv],r[4Sl9EBȄ50PApufHDK Z)u2*NpvsZx4Nʦŵ7_wm]gβ|%˒fͳmp:%>KE`+<{ dto#gEyK'F/!ξ|B$ǖ=-6YWrQBZ Y^ט[{lQPצD_+9p-loq:~nz^X~}ބU&AQz7-r#z~ӫ ï Xw~&I&i1pje2Pk Zg#cInUz%H%+lj]~[RX4(q`P,*\Z,AAR,Ǭs($ BߤB f E3X,<,y=, *-h% 8N& MCNJ"!ZnoDA 9tb  X;ܤ4xiWX_V`dC)r57Z;4is &bWF@4"ZWBǼ0 lsJ,7ٱ r_A+|7d(_eL2'&{FJT 6L)o*dCIθy~ⵄ۟WE;ɮs*CQK?zcNee!BZF뻓bOu1?Mt[V4v&m 7|h~aN6/͟/W⌾/n4`#ij!66\0/OyfjÁ'ʥ 8!hFfէJ$/>ktpkLJ)  ,/$+΢k8ttmn;+t?XZi_lIy~~czOfW~mJ}0Qi%gr5闟{S(ꩬqQGػ^C.[ˣLӇ?=O ҦUd,>TC!'3nZJt1Ztbw16bA`KmQAU|H68R!M)3zǙz eu΃;PViFZ*p*dMA'/28Z]Nm29Ib{IOR%u68ٚ@b`y;=FxMc̺WCVJ"8/̐lQDdB|qtB(dGBW C}2TP ]9p,8a"c2 SR$Zt.#=u .d8'gAt3C-& \p ^jRY ؼ%ς#a 7:'z"Ww㏽l0s7~aAIyy+6~VɥnGU%>ct~/^Qq/y0Ƚsm2˕Lj i?G릦S)I~6n:I^SlGfMߣ3F,zWg"a7^$?गB7I=9O+mi:hu_m:n1nDkDr̽K@qU9IC|\?C&6 7T/C*{qidONR?G϶g } tlwz:zGW,p=^K"ٺ:/W3g՟Rha=*>^s/ 1 x X%~RZn8We!#X_ nLd?3 gQNUVZ%&oD0R HeTia-J!g]* 8'2b5OƹlVSqpgǎǜJc4HQK$2 +6%9y]PJ[hM%}*"g7[I#\!p`4K^:seRE; 4SDŃW-Ykٓk)BtP*4d`O.,1ʀ^%,HچR+|m!ѿyrFw`Fj:z*fRIFLeF$c*!i.Z="4fQ}ы F"e>2-FV)+nt BLTAP-J"QA%s !J-c.{ 0P5 y,w@ߣj<dz kYv2HM^)DϗƝ$xy&}`NG@{vi )A0P$YѳRe_Iu)Uo g;lgJo!a@V{I锃8n TgH<]PT&s V%:l'r3_?`l'%2܊)z0j0urPr+Ζm9zduS.txڟ@QV࡭5y'Xל&k: <UyO⧟,M(A5E@xQ)6km A@SkoE:1G*b5;m(E(1xlm8{wPr9[68^0]j7Cx6T(YG\bv鞩]ϴ]/'Mݏtqww[6sl<2T 0'v_5kk6m{ZxG6!f!dԒͨun-7=5,=/NpK 9Cd!˿KŢDQR5RU2e!8,ZX | \mo~rޮN"*ZMQ67_+gpFuuqǗZ8s gv1sA8t_]:xeX+DL+Wc'H--6߸΢'i[2O7ױu(&[0wH_EM\C@P኎LS~5 3]x}dcŽ!¡G~7,l&v!b "q2mb5m2om;bSVT;3%NWjVnvSsg@y3RԸ8y N F_aq'0vc$۫!Ze$˕\қ:MxNLv >6ߦƞUxႢ{'rZUZrN.34As kq6A\s6AH"AJ̠w$(gW`gW\2pԴwW am?\eA \er<*vT.=•-\!L.2pT2ʈgvU&lWH~Ue Բ+9wWFHm=w25[MO\ [a7GDxwzd:4j48g>jJ+ m|neIH%v Gבܘ #n^ʎ0^ ɍ_>֑Kմcx&~~x\@$u ;SN(>+km3[64jSa7=^[|a;W%Y(ךY +.߼ȓlND.2nYpvTd a:cWOR\!\er_5\ vT#\1j~qmn|)2_{^*E#r`!:βje+"U(]7<` jT qdJ s7f\ F@% Zyc0%/%/?PwpxSܮ_ Sư:{g7r}1v )=F//vbKz^⨯Ec5Z㨺j~-P*6P\B U8XQýOdW*HbiKŮiڂR'AE[у rqD3I5T"Ў2 IhcHtQ!< )(Q4 Aznl ©I@e$gSGvO9T:931gݦNTdZ&z\e؃kR miO_ ]ĺd c"`b>TSiҢsjgQ?z9ʛT4]5Tе@ZhQJ@ ukiMRJyT3P$?V+\-AE3_[ZgZmp 
=Ce&MmJn?e{`m,%F7R(Py"H5:JiʪE9*Өh(k\Y-U1%T[YosԴB|w5\׀sN1ȹZ4id^Dl\o3TJo/5S!lmݝH/ryS:k1|-Wƚ^T1HL451sD Dy &¹5I,ou)#]Jyg $-q\GbP xgebNJP2δ'@uEb2DHG4ǟ?h>CQ 5PDpJ [ŊEUֺ*qQ6~kKb-.dvx Jo}ܓZhJ/rczxWU~y\E葩AST0&P5D,n94$,P9RVC}$&"e E+-&-X-quo9mQ* = w.Q SrIC]Y$WX^',|r Q sye,S+QF$g=k-w]lp sPHE18(|k6rv(Cd(]P`H(#gݗ"H)JDA0 BY0Fi\ exM(lPP t:KCXTԥ GTEǜf*7n /Ld-M"Pe]TWƨrcI"9*if2jMA6YPKt$FKq  7 WFgx'u1 Jf5Ơx"s1g/>Jh>WYWTQb2V Em+'AQqʺ,%}X2\z烟|)$=J;D"jgMzR.^G R1 ٕ&RW-g0?E#B'JuӺLKyJ@ "jC q[`& xǍA1X(kW >P1@#9XzL(I&I% vbl+!&o~7Y-}WA-dXל?pQbZ-5|^.RGL};`A iLܨn#w?Zǥvf?J!Iq|QS& G\5j3rUJŇFs /dUqE[c[ơS/"?.ˇ=Y񸫱AizܛNs]z5ۛLF|?TDñ=܍oû#9 qyk^);m91>R\M Pm+"dr$_(ו.* G:;5,NOzWUxuϋF 뷚^\j l &ס7p>}q}Dzv G<rc'S ĩv!ă2L,h5T"=&bvvį.<Ф]?϶Oƭ ƭ"~D rVzguq{spze*aY7ߴoK)9<;Ydjt~c~Unl5kޡe{ke-w3;tYw.OXS2Sy[zӫıMDtofg@Y-qA/MZrmBxt-8^454 ?]{oG*y  0L I~!)R!%I5lKLuwU]4@'oSYys_~=sv#4KgnRrH^<'r5&-8p_L?-TZڬ ]}튾kDgwo<7νAQ>xۑ ^NZ@P`AQ *d@IuIԪŒ09ms 5:AV80djϳ޺2O+BغJ!w\<)zG9M,-K cu2NY.;O|4>[nj/ GCLF0&k$WLZ7?f":?Dt~x*\*\tf|7pc'i27KW*Hd1YV$Z%6[ T`GP u =!LK "8ol@3@&V -kt=;#g˜؅6=3gسBeiKoEl:/ow0+8_.aaYVˎ_Ӧx6% I\}B Z)",ipQ,< gZ|VL/_uIivvk۞wF%[+Ab/Mɴ0B7Mb GBќkTn d~ {<6 E{=Pyw` 6,stF=w R{rTNBI͆sPN,?2 =?ִ !C  4oc4<󁥠B b `RrK R{;.Z`V*#&1JGgJF2-zrkgYYx5~U;Iq\'˕^U4bX<7n X b/WQ[2tzVUGqcW}>~j{ËV ),P.>oswzǺa1oYը{AriV )0yo?(Oחغuoaq%EVOM DڻSۼ MEM\HߜDO_? 
Y)=(sg$Sg[!Rv5N Ϋx4\U{~6?/U+)L?D~)/~z Y7m&)Iy"!syWB+M&|9Lip]oKIw` ƒwYL\#&2һ'p0WoE#Rpkf_[f;x:^"ȿ cuhjXu^.r2L\}wE,[jo3_ ~9uK#&lYLQeWݓ7U\&!1 H\4'_8ddp!F43,d2 ϥlWt h{񎣔У6hnC4pxo볆N}|P Ď c>>/6I05`Ɩ %%Y`Q2Md2#A=K6J-JѾdyq'܁d>Ap['Y̥ }29DY(RqJe'bk4@v Izi"3ޑB&)T+x˘Z DNiHuEpC2FiD{{*à?Kla˦ >Xŝ(@gHdQg:rN3U֒^r;;jrvf1^,J1XTVAؐy=#iւ(#'n4L$- iR ?YX:#gK9;zN<`l Wt!Jv,דq@C5R7,} ( (i'>䙒M4d(&AQlۏ궽fmG/{uAf$0`uk ^z[ R(&fdag>v҃md]zݛ޲=͠K &!:2LEFd:g.1C$sq%ln 9>&8u`h=t{]5U7}z-SUozQ4t_?ўPЅӪZv9Rt}i۵y~84 Nl万US..LwOY;mޓ[Czr0jzr h ^B-l@ovmP0(TPQzd:r֬Mp0 ,wFud2u,|njs\x{pqpeG{f-kd,ƨ1#7&!AV\JLE%KdZN]6;a#7&hF ҵu:u8nl'y1-N/v㬢} 3mw3ۯgP>|vE0|Wփ@ߙ^V 7?|7]sVl+;2TuPgqcet=^z5}$uU5XQJKTW4l4"r4ǢQɡGW/R]Y%퍷b)RV3Vgϫ7K& ڎrt cM>\hj[pF4C珃<L^wvF̵&erh)jkm:ɂ͌pBڗb21Kg.A,;aks(^c{15Ziq5aBM7>& Nzjiҹb1XVG#fl.eD`컘*qkBL+hۮX«r"뛓us"F3̎3>aBw' hu;6޳9]BĀt|5]4Z=qIԴԱdOz~e;n5dׇ춭-56,f |s,Le׺tWrD,mjd200YK'b҂\=fއX}';;ﬣݮߍ6B&st1e1[Tk l'RUBeL$Tm_MH#ܥą`P!xczg2)ejNdEmyE캯fWXզ$^by{ ˲-MA B.(SN Q[`g4(#`D@N6Ik-=2G[1|: '5n54v7Atֆllh5IDr$dhT cP ~ؘ-` 늰n2Ags'^8Pt)=1d4p =? 
=?ִ !C $%El""~ȊiQKR}TEQMiV̙fOLU]4; V9na`)B*RTWZ: z)IF6+KW 廏][Vtݤ[%~LFbe\kgo.qV\_\h򹍟М&ޣȝ !|(]CHxNz>eo ԻM&)YXΑWn4㒉5Rrl>lNǷi3,"ݑ'Kyf]z\'nKny~Xע'Uܼ[@$1ZTuj3 G_CT#:˞xVxQw_;bɅrk*W ->_c1vIJ%Z1Mh(5.y3U#y--Vכ/6 q}jۂ-mŤ-]* 9~Z4q43SÅH\8k2ad$}/i4HA WZ=YD;M+=4zFEpY0LY3z-8SD#8wyVF\$ lAR,{60 b鬔y4 .h!3B&e2LÒO/1$6@P+ P"g?+:OH˞֗Lo׬'Kl-{Bˢ(` f!Ӧ݀  s`KQw2Ol5 V܆Pʹd׾: 1Q(Q h#" H^,HMze-g9 5ES<;E}~a/ R&dAkǹ*K[ ,d ~*dzZՠV;i6} qBJN8aÜ Ya˖7lg-1 UO`vYP]T>r0\Gb@ၫ` rR4"ĀvY2eg}j,r·9 [(8ީ) mNbgO:$ED,bNizcM\E:qȱҡ*K7" /4퐦PZ QN C %(g8Yݰcڮ>93~C WUCk×sKaڻ51eT vBȦçxnɻf̼6r7?L'-6' 0_]KWL!}\[t:pwVC[zUCssrMS8#`:P,um_6au|Ù[Vi@A.]S `zʦ'Df8UxrqQb+[S\O'_n[1`Rt>0hV~-"1<^ք(1JU:1aLe^*a2T LEfsGֶ9 6J^XTߏkA,هXgazy=_ ˯?\2g6ۣVJ_.uѯU`t 7RnNrSA@6sS4ݖ,"Z4bu;Eś]yrrA2A0|9ܰ "Wg/,XP D-,}Uc)K>ֆ͗exp"+5t\n9듖pK Yd̲r;̌I薏d+熝N p=ԉkf7$uk˔-r&Wјxx\2[IX1GѼS*-օd@_Ǣ-|yq{K(E͆鄷㤂o~WBEQ Rt: ŵ Y*IFTT(q0}zx^B&kOƑed }G zz\a)$STJws_=վU礲/.gYn4 YLm@'אRVsZ!DYYwYGu/ãUƫ )^} ̆Z b!]H|:rkbU3:!O#7Ǩ@鈁9 ׂhd bsH6R—!F-ӻXBtjI]>熎ff>?b7h`5ÀW_z;-zV@@0ҧBl} o߿Yt-C&ӣCN''8;9?MH?LںT/V?}zmI]u{mgNs2$QV2BZH=rVxkٛ>|]@UO &h {HB wC^=]FZZi:8~5ZL~ޝE-ώ;l|~q-L(͢w5/ގG騃+GzBV77Un³-+&p6Y(Cù*:!u~'Iw-~v(Z,Iż B\Du鋫K /J\ ^ǬB̢XJ+^fh'֣*81%DA"j7fFKvs=_y/MPbM&s'b4r0Jg - EL:@t7RpUd0H턶DEJ\֔DBpPJLLB{KF,Fy-lS7l AI鏤$e-fZ+ J-he/rGh3}` lؤq(h*tBф'%:LH u,ۄ^9f"w"4~X\+[͵}q #pGG>STRAol0lP4,o4d*?@@"\H>7$zCPrVcN80 8#46k i#օDRk.Y*YqU6J XȽs.dhGXc̵7U#g?$B![y~8 g|!"8@>R ]1fWe;||uLh@[l)eYyIuN;|gs;x/v`R_c4yɔC.D,g:9el13*(4tsOCXAe;bf?7/ qNow{bTZ4𞖘םmx6gV8Ixd5NyNQd6{jmM|)ZNP@% 8< $:q85>KwB&Cd >Kua=!އHj wMoY^׭MK O!D鍶>*%lt𚣊:oql h:#$ϹD$% (TJrn: 279%A x:չ?| wcئXgnW Y8e[7c:}@\40n/bƿU0(UBD@5wC˕k\ʖ  eǠ<ef. aRW<,$c4RkkS v)"bDY"\!VX2Y!1/[`fq0<,x):F+xOՒDl:Twu7WjPncn?nz 8?~y. HdC,Hk࡟H,`RȀ_Qq9&.VߊzyD$FH%JfcZlwT<%HDKqZ6Ƌ怪iW'ǢeLh:(6Q XND^r.Ezٲk5RWweԆ*bZ%mBQJ4BgIf-EqMXb ,r*OMYrAЃӚ]dG974sgX͜~qjXXg싅P eƒoGHn~Y0'tYfM??\=@3#Hr˒ʑYspBHDG|Ei)\2yO5:X":+3k(OؔĒ4 /L. 
pN爙2O/evqb jWڪvczq8j-V"z$O(dasɪ̸FCѫłBW`r(:J45 Y)Z SFRVd+h|9jl>_WXZueD "ne@D0ShpA$\2>U2rƴQ\K2FRPF-븼#EUI }@8[n}Ljd_\q4\l7&'ieN#h(L fXV!QOY%R R.cCV[$z6o5*_sd_Fz^ת_Q;ZTjUSjUMV5ժX 7ժʣV5ժZTjUSjUb֣ɬx& iBsYs9ŧ3Ͽi֊ORRZ9Eof,+wcsV<{FÂ׽e _t?d(oHGС]·bڋmWwÛs+6!džnPhqÙ4<;Zrol_?>Z@\N}yQGsI#ꍣtetpy:_WR&ע&D^k"{Md5&D^KYrʹj"{-l"{Md5&D^k"{Md-DG~*dz4AO򏯟8S| ~OVZd#OeI܅FHY?N1gEv?;>,ݻ_nDzVD M i 4GС]6bl[keI"?`Àv7aiq~p-.a8 jyv@dr:-`Gy˩/O4{#j{JKzzr9]Zf ݟ{z͸S;^ϡF`˦T1q:XSJ,6ZyD\%LJ^o@z\{RK$B.I4Z,@dZ&J{2 "sm+[K[ ,30(Ѵ$~^%n|rկv̠\BKHuG/E͗9mBm\j~Moop[-7dz˳97ϼKi;9\|˅{|pY[OAe5l̚aM,]GIןTeȭ_&Ɠ[޼dyH݈cJ,< E?i..ROH)NtviC ^݋@{mr"J2k8J'Yfb,(|ّUƪ9 "=@Yr,q\8-<V"Ȭ9j옣~x:d여XM׊̨?^̗6g'}NDg6}-;j]q ϸ[Ar'DH^BD*NJ2% "9*X"J_1jCV@HrZa-Ao3Ü1 B0Y֖VYZD@9Y+|CRxrZ.؞Z_oQ\7xUyQ#2{n5%L\e#(;ҖL{Y4!OL_Rck![CLE#xClA^H#Ae9~i$Dvx6ZM&l0 y O>~zoBˌ<F7Je^F׷Z#GUwm]^xXs ʢK}ҮZ%A[K-=xNO[=ui9A@GC0W\3夐|ҮC.|G'vD%DIoSkELm6H LpQiF]v^ ҦA[U%L(%gG؟%A@"o}) P n(ɱ>SEbZ͜r.kl~R,9 )[QI/O怜-6S/ |;]ҠN0F*54!$U6tP*Ib"8 ocTs4ou:Q{jew(If <0&얃l] y4zU<q?}{c\g;.ݟ94d:J6X yP"ؓJC.[BshEz[oiI*zG$%gKhrX e( X%סߣXSU5ڦC ƞWS)?p*[[j,o'L3L?FMkmfS}J~˃G9 (=yת˟ ˊ \s-hwbslNlZRf,p*j 5SӿtP Z=[tvz~k4)\ -xX5>xHDn sIMES*,)0MσJh{C@t3>1' &\K O`AkC*iV\+5J&Yb1tV'abGd-=zIc25\|L& D)@(_Fu!Θd CΫI$,h؍{XW B Ȟ#/2z H#a0VZӹVL*irTX9IN8~ܑ,[';dZR'bR"s9օR%SA{"E-7{T;sRBL4(HJF%4^l&uX5@&eA5[ɒxn_VHF/] ebE;d6rO^5fΆ\ @܊zj{ 8Ǡ˚7Xg#՝P\B_%Љ%%HKȥ"FDXfQU@/b|Td_So5rl%ɌE|12 @ABY:EAu6YDgJΣy3- DYVu`c1z\hwSmo '5,!aL6yd#͓Mz[:A?61tI rOh; !uήKL imԣXG\vPvn]sTBS1M*fU Xƨ, y(i[ܗ B9 %Gpٛ$S€} L`B9Ufܦ V@ =ϥҹ׳T; PuLYOR?F8v%"3'ov|u=]}Y%V"qHu*fu? /_XCS^xß5,𫨛=/N'0>olݿh]cyW??OZcV7Uz'ů>V^],;KV2!ؖq|x!q_jd?_3 Ѓ B7^$]x_<+kǶo^ \7oa7G~?lΎNb\!`Û7OekmK!mWdx1ĸl? 
9UgWNH!UJ)W!\y3+ \Uq.pUbpk]KbWs FME]?!;g&q"VBMAd1_OM6ϛxP *7 ~s-^ `ē>1LM\O wV링IiŰ`0C[t]C2Ϳlð1XדYz/jiVª(͗6 g|e(7E톭^OVC$̲C%R.:'^b- Ř 1&H cĘ 1&H cĘ 1&H'A qj9_=?责*߯J,V\(tGم jt:[rdW[sU^ O֞A–B&-F9G2J>iIEd={]{ʵ&FkJ՘ )9/|(eݢJ'#\R)b팕&)@Sf܂]!q6bonU5]0[ nSlI"}XQ<j3&Ǒ=;`yxvtt”K^t8 (Q:`ΩNL~DD(zlJ!yR>" #E5ȿ`Z([LdՉR"5Ht,RNhM}A0AQ9$GZc*1%j&͔2 Yd#]ݒ5hYjZ z"t$8R> Ј'͖ZOaZ[G||iAu@%&vA[Pt;ہW1<& { Sa€0ak1昬 @G eeg^亂uvhlTY`]@o$2VY5Ae߳l2.@6X@Nju")QL@tY`ޭҫ(K^zVۡW)ҿütžێ#~i \<Y7ioԞ)vwX|윬8}Eb[xՀe5_́BC"EPPt@xm Fg JeNluLBX@ѶDI{G"J`m(PT^4dm23dpI(\7s& 7dRWv4G6B9Ml ]L 's6sc;oV-@1)oIzLT1,Ll ,o=B7…05@EJ`:بJv7f g}vWeW1gk_w w}b =Q S=˓)>XӅvh´c[{O8":dxr ЄFJ9prp˯8(Pd#-B0Kuֈh!.*d!kKM)%9ZDK63_鄣5'B(@ٳEmU\Q1S&S[t6ĹCy-d5^Hpt)Veb\QS~?R0E1hH%]je.N5dHTZ9`H'VzQ+qnϘdZNXS3=ٳj܍p55[-NGB>ur/;lϺ/] /T]q7E#j9Q[35mAKB'K.I{ƒ ֱIP*ْR!N\* $COMœd9+ɀQ@c( R87#c; iƾXظR#>(n%϶:xG>vs2nVM~0?O"##LҁI(:6z+(l)$Clg %!:U_CuBB1%$559¾pH)1//?緕87#6g霋9nnzD^lBp@@l)$bk^t`k3kD 7â d! {QeXFfɔQAT4Nup3qnީʋQ͉ Ǿh#qD4^9I6YD£HeBIғIIC=d|VFC d!EFM &KJf X]v$Z,`ueV܌?djb\\vR싋1.G\67&xuMLR$#Ӣh}޹Lc)8m|\yc4Y,?4>bxsܘDُ6~Mbb>Eϻ^{W1B3B 9@Cɻ 2I l1$,3Vrv'%2֖ QXTus˨UTLtt5$Fc\VD) (¦bQR ,!0raBBm4ĹŢdYӣxz`Ea*Q'M}!oj>ᛩ[M3v\{s8HSpٻ6r$ L|_c;b?,:F#q<߯zX%Y۶t$v7]]UU2ZHu7>j F,\݆ Ic*jǬW>$6d3)cD.Ȓ]g]KH sXdQ&,jk\4/A)j,+x0BH9l84K .ӸfM$qPW.!WޕpʱԖOWUM!ɘRVYWf7)^|@"&cqu42v@26T7{]'C{Vu pr#A>Yύ흃Ni|aօ'Ph@ a4JKN ;\bm{4Uꮵjlap1jW; VW۾wY 6ok n fmά*۴ֺ§0:i05 .?h0quɼ!S3{4ܼ_rKO?6}/G[z+ћm3mwp2<m#XƣaP:(B9=zLX\cs:6*GKq(A YG2M*$!PACnäme^*HKVƁ2ű% *dL]7 댜wY'NƯWf'eN𞵓ǟG-X)r Sw|9L1 V" h5҆%hG1譭/Nw^IӴ[JgsW al}BM^>Zv^ k=oEv\,LXRJD$m:NMujcB~#&R1+ nxtut3{z~=pl' \]]׃j Dymn~IϷ<@~}x~z7s1=/jr?Y;W|}lRsssPBq{PvJj/dKX -s_ t 4s%vl?sM7gu7??_/ 3>il/1ńGs~GmOKrQTX'ь墥 ͔ջ +VZ9+'k7's ˕;|:"{H}|n.޴_tk.&>Ѡۥрqp5H\'}/oOǟGkc_.)nNg8J&Halij(aR&J$'ADw,~E}pEd8X uMF~*7"wH@ZcѺR X'LNcqN@C=Pԥ@a'P*Zz.hPH(zMga1:vvFΞ. 
]ɷ .cx#̏gl<9Kfn|]ۅ~ogS8xkn9 D Hћ js*E,FxPyyƇLPLrP2#5T+8*m!iŶubDEI"F_Cx&DӇ=Ƨu-tUywST7Z7xܒoV%uU(&J^giB#Z`62$ F'G[UOy^\sL1ꄘb-7^i#k!4!gVdxtDTӰą3rGq"m\dFY=Tz$ƥd j|@!Ŕ5uK}gR{://ބ[dv_p`> umٝ.'#ԅtw >}jLI+r XA*2 [ *ZDJMLݩܫMCr_8$WO6<*ȋwebtv\d4b̌StO0<.|ry!jQbnXoyA{@S3*q9,2G*ƫ=,S|5t H% ˇP urIۉ}IFb(5HhEu*0P 6GoH Nn|1#߽}͎{lP|sFgZWAgmT48XC$Z9c(D*E)x+N^TN).VFTYxF+ծTaԟV[Qu[[\x=e(Ի>}& ej%K|TQItTQo&S[L}ZDA+Y%H>#X}f)06m>$c 0d3)/nIļ\3r?XiCҮ_q{-f ^ta)dC̦6V`P#m"$ɡs#sx4ix1eqkzgɒ[b9){w҅ $WP)NJA#?oq_+-C1҈ )º b֮N2 zBu$ѨkO6MeFճjzniI"I&b`hsr:9Ң$(D)II{^80bi!I!wbsD l3Jd ?e? 2}ͳ H:Fv0#SV7hZ}``JLNGs6:֦\*D.O?z(n5Nj4ݠGl -3Z/ΆyQN\z_Ni6_%hNeoØ>AUMc? ~Y9j߃a.9e<^]5=be7$Zdp0Kw'J҈lCn _*F_6T޴,ܫn7OH-ѹ8:uVkQN3^>04suGIL0KyHW"\sCy/X}4L!8fCrylvD.[gٓ\<N+1jY*۪AzfɹlkWfe:^>Tojqe>sfRcB]0h>05ښZe<<nCW_8&dm.Q_i1;ܽ9h|h.F#"s$.lYk5C_\&j F =YD;M+=itaJ( T"c e5-[ b>ɺ#N}|P!dA|'C D`BF뎣u't``0)`x,^̯{n*NEvo*)??7q o< 7{2`2 _ƙ [{v)Zyaa}XWc Q щ\4PC%E(ˋ%,ZMRɠBm &%)߃Rݗ` Z~ DP[Yh!ZlSwEC2rV-5#\#LK喰(~GʼnU:EY@iW6ZScΦD](G'@ 6Hv&XU{9UμFl V%Q)`JRNZy-UHEv۝2 f $32Iͣ5@P B۔T1M{E c9댜=q>ZȎƀ@8Ce6%t1lx9i\,A$t)#/5Nb- _ E2Ki5uF 7SF8&BR11$3= jٰz;ġn;ʨ zݱo͛5CMĨtFwPd CX/ѝxXWS +yt@ߣb:"/%ޝ4UAW Hv^(l8(E+g qB,Ytq0te? 
Jvtlk-i'=ƽ']X|ew{ 4RxyPv,noS'on7$} .Ie{?W|ԹPO.uO"6e]PHd|Ϻ:y\PnPke¢C-yjŃ >j+Jc0Q]l){E"k£(#H9EPIUDɈRƂ1 l9{.e?t1sw MSjciϲ:|vԺq~ iɲH׼Yk.{^O}~i]?nѮk=wgom=h5<ZM~9ٲ43Ȇ4׷fw=}{5/i]_I_Futs-@h8zT2A cK`SR.J QSN5ϗO|y]2e!A{E>&+ z'I-r, W1jJ@ւ ^@ͼ)]РlւEk> 2l9{Z&fSw1;toJC2Q:]?"WnV֡ϣ|3#sp#KBL LJ+"EoHZ1琥1U zYF<r͵"I ؐUf$PH $QXA%3cx]kF̅& /MKNT|{Muum}}xDD-@PbJoY+uV!Ҷ Yp0IH.9-Q=$~FSd'0Wکxx_;bZཕ.Ys0(yH c:t;F,9~Q+^3{AO$2ZKBD)kBr-jf\?Z{f -vp`E=NC|_d#BW^ 2Nf*NZW)`DtcT"P2EbzNQ[>҇%R=,1$W [wsw9=~Mx0s&wϼy=螐ěG> bvajg^*FxغIGdf2K^͗d4^ ^#u-׈//WOnû7!ڣ*%>bP#&+]X' JReT2U:"~8_}{~Fƽ@ tv>z‚/sfh\e^?mwҊ9`WfիLxf٪VlR~>2>Ч D,b/ mz3iy7u\h7^6:<hLڗZ_`g'Wl:_ #_~,-XhRXMxvωlԋ˳ ͛{«J7{K~jA%R`4a2a:6[Do&2EddaPD tt %D,dczYl[YbƳd$JZ[p55ɐͤLpő5:Yrr-+ZΞRi:6Qmy~|ܘ1^󰶜6L5v٤-tt̆9IAjT9S9ފhtrpk-5ɇmN^tIr\q^B-늼1GP7a*8[cOPL4%f`3>/Y:R U֑D=v2A(^,KmT# ϯiښfz(4h2)&9!G/- LMr4*'-#=WK_9栅`ljcȝ`!D!5¯v_2Q eAt{"I:(j:Ϋɞ~tiIVmNkή.KS|;զO䧿-Ms_@toy*x|nM;Ron$yo7)Mx;{zX?_g͌xyrfE]XzO: ELGߖ_[izw,>d: 7b$Z_hOd.&ut$USڔbG5k vH#b^gdU/'B; tWosgtչooE`QR` $Epn]W"||$"߫imHJv2F|+Y`}4f? 3w ?*EV{~WK3<,}zCo5 '.'?fqw" 4'~_/ EHH* '<# +FVǚP%).Gc^Z-Z? h2P-t&dYkP$ 5N( y Bň nuZεZp&> efIзiOP迚uS|AU:EY@iW>%ScΦD](G'zgD9J7m{Q>s) ZYDd:bSS0Ve"g hRm)F$ɄLCHb '6%UFӞA7fٓφg3OAh@&Y X gQQE :DBd 4. 
JهZ!ѿ}rFwPm"Hx)(Ёs/D!$C23BfP+-NG=iԓCuɨ zּ˗* +X5>_Fr+$Y6IY3%P9'z3օ_E ڑ@43F& ׾dFA+9jDBfQcK)h!XtJ.%2I_β༷;b:s#wL[AW%escp6W RJu>'^ֻEb}p S2Xaީ ^)ԌXDMȴRH+ن$姘z$ W^F53DM9"դ1IVt\2nrתF3|4OfBD^]4&gq3]\&bqˡf[we;q|8MjPb44l-d:ER~3l et1{όtJ& @8NQq%]&js*qac#T.l8=>+;e#)b?nz7 ?oPp|;~(/pấ4q:9^J)ࢦ3*X{~H{.ޓ*)H%) 3v hZC,<)8ۏGqlMmզFǩ-fmٳv`xm3ų],(eյmFfu9ǼLVRiUgl8D51 HF&G, +I40FGLu[q*#Q3[6cr $4zfQi@(`tQ\gȨUB.7YS6N4 ` Z$S:@-l:eSp3D/NjQr*/yQ !ZA +1 %{0%*F,/q5&rbѐz^|^l 66:‡4>| [ ۑn:K_5owyvbI&~+m&4i#'0M /w|^wFrЕbWW.PqĞi6$[~HIYciIXvX}w{IۡX* ץdC%9\:KXR$CARHS\b>c3{̎erdش.*c1Y)$'d3D>#Ӳf<wkۦkc42: b pHGSjhvXIHㄷ:c4|ҀԄ"%@Mo 76`فa2@ZMGmyŦgJ{fJB}d-*/NMR\)^td>;6xmL I|mՒBw)%3^Ĝ:yhLO0:;GSt|rʃ](OWwϱ61rӌlf&#-)[ZNX@ aR3A#t?Z4&)y^c jR[-Ի_(oFvW#za*J>𥢑vfOoMxWmfW謪0k]\gU^շ;2Ae<֟R u:sY"W.o9 3ma9ާp\ր8F+K-1Ցً`k>\wse46aEVJq||Z+]Xlm84] ZovBQb+WRZq_Ai-L#)*D X&7&Jڨ%2$PXY- ƀ:ً!c)bq]@GzY˪8u5x;옎ڱ$Ir_|R*e82L *h)Y{)uNXRcK/V:& :.r SXΘ@Ud$xamĀ01 bD2<_/-@j!vbO ;.4]418ĉ6].Ѕjy&TZdÖO}S/rg\d,0^2%PX#uNBq{m'o#wHEo=M{2xZ*v٢o{uY*twBDW + J}awXXAњuaݡ ]mR!]i쒺* ]\TW誠֫R!]Ȼ `4WvF]Z JP 5Ev,h J=]G{oZ|`}?ꟃXb(WugtiZh+{x=]L~`?~8g󦑞^;"W +Tv G.RW'V2M? .rih~ʵzحixMCO'zdeVԍ\Ey}/A5#({xX/z0-;(T>Z嬌9UбwlMJ\LR.^#(yb Pb?r-"FĹ'ou7˙Vm͞0#WoYD=*qf+/2ȒRNtbEE˹63+ph(,({QE!mt)v'f;c@]"Day~t"L'O+8 Z'8KOQ|Ct%Ps;CW%΢+B+@%m^OW/BW(]RW;)U3! +vB DOWnXagժ+tUGX=ҕ_t_w,pMg"cmtJkƱK );CW zW{A HW6*pUgvVtE(-#VJu1 \茺*hQ Jzzte2;\ EF7\򝇸đOE*4r|~/kާ~#Yf2/Nژ@|`&Ih-%PB'h kS_=x֦3tUZ&o;]{ztzmWC]gQkp&2,b C_ysdڞ%U}k<'j -7bR媔<r'͑9*C}AKvA[J4Cm 26`ou`YEӃr S\ ,3Yɂf^ҥ$LDTփqk߽, ?LJw"C!hW-w}]/?~:}ǜCVj~gonM}֖iGQ\u#P7=NG]{_WqˍU*}[G<$Fɾ{/*}^VU*}^R%zG `;Q+pug"jmQ>#jx6 pc S/e$fe2&'*/U3S2}qw/z`A2ymgNխI:5e*[Ę@K\F){,D׭on=q` :lN^@Y;93LIlt:H'GfWk]?y c-]}Tž>כb=49P-wǼ;FݢSmS.7tԾ[R^.)Қ Ia* 2Ix2b93i,ROgȱ{5{$|B>#]?^f^m\oKmֵ+|/^I3q._n3lboD\uqVs UR1 { EԵY~W^>S_k>m4gaP^%CR-!B*8D K嘴R;t@DR9?t)`۔G.| {b:sRepVz=]9 }@snn;Nu&wެZ)+O~R|q7w%gOomn7\Eoz7U'Gܳ[/Vo=kb#]\;^hqf]ur/fzA0z_ZWoOs4=pU4RˊZVzm=Gwo|}7Gy0oyutcAQ<]>y4CfjaT 1Ex{)VC2ց\`qi,j8u-:?^߱/cmzxҸfzqk͹CmC!ݖ! 
PFFISUq0?!.۟Jgһ9,tY|_JfP2Kwj^{\)[yҪk㹅V_Xclr2_rɉk41Ӎ<CCܫ[盟7?6*˝h>cnR.+!.b6],*3؆p:Ozq:~BRq>kx4Jbd O%"hVJ`P^see<0O2׍$w+ !3G (4* r_y+9-Riy>ڴ~Xoӆ(}pH}2~Ad/AI6oN1;i\jKJr#!h0BXIpU`9T&ա;?f~mbm\n5>{ޖț*{%uҲ\Y"½$}{5y|sU曔GTt0m+ү緋j~S&Ut2!iL L'y Kl76|_5￿iY7g%i&~w?\~z7=ڴ|/xw_Gߟ~ dhr?sƊcMOu:}߿^yFxt?Up_t5HZU/3J%^٧O5t~ڵQ{CY` `_@?x3 3~Ftb>\0F+)am҄d58{Rm C?C6k[nt,vS;3,f=Zgy=Mi`[$.,wAw6ږ>n<AV).xI5<~{:MZGƔ`^>2#]M8gzW\J7?|?NsPԅNN뙍o'teӿ @.OS_w_T_$L|ƒMO?,|@lJ+WuVB%ߝ~zR[Gy}Xƿ_=3k@hZuUo'X/h2[?FՃ:r<뛄5h_k':2s4L@P֦5rC*~]ruﺇ^Uٛ39{zy"^@B|^Oy2no(3Bd YxCFa]V\?׆t0[RڞRHv28$FUO:JV.Z|z4~ H[3WMt Tկy=>G|ۓEXp*[ft~T::/w ux>܇T7_mT}R^sx+<" ɧJ9_jTY@\ P) =4-Z= <,B YH}">S(Je]vdxg>7>*Nlڄq80b{k5\b 0x<` >wů-Yo ð/?_-!^_OEgg[ l@\?SSl'.>7FVUϔgp2KV2٪㬟/.y~E~`bx㼡NdCo yu#ty倁]LG/~,/V;wco&o=o> ^jdцsi6/_`_blѪ+4c kZԓg˳r3-x>˅(Lsg'k b~) bj5X/rB%;( ppQ`=[m$K^&2]܆@btq)H xt1R:0VEeyo7w~_˓Փ>>7LQ:x`TtHMTY؈XZEex.EPdDk =.PWDu?`~F MЈqet=Mi9w=oۿĿJdLyE lG6. TwT7hTGUq$[;>4nsoX'Xwӫw܈^v;r,*۲z\ iKY(L^t5сׁ@r$1B$O\A'S!5xڑ<W^QޒԜl71tM 苶S2]ulДUA8]vԁ~l €V`ldQrxL6\N f(FZQ c\LJ hRJB_@'iMj%(sHfIIt3 W ԉxDNJc}3q?@uLJݷ:Rn4[Rf_/>^Et|yՈJ J&P팰Cԅb'%%g)S~8 o#YEAC;2x)*pJ(WQCȚAVm{*S㮗QJDX7Xz%В%3$œl;s7\ƳQ-GpFJeQ:Հ4F۠RgIh +X|b#եur5j",&gk5)r?WxSΖXB@^7H)7WC-ǘRF#8M);=A9_ZDm4`1616˱Ƴ1ڳ+q4KY:/bc1#:D;.tdPقT5{}^p@mB[+2(aK>SRs!R*3)B1-o=UϚ(AkȒB礌)9/|2-*+z E$-s@{sx +Z[fCXVdzex|,VoGߨŸ'=:2Vx{ttxrZgŕTsPs@a~D@"x(H '!Xj*1R(*?`QZ-RkFX2Dm!h-k=YDњ))X JDI;bQD+Jlf Z!A =J7ҭ7{;Y݀?<Y#'<i}tHf.ۣZWk #!Q6 RB{g;!ƂPPgG) rns6A(A"2`sLVyOH A#TD&4he*6AlKUTvSl޷xh߿ %&Fj}p| 'e?!~)fY1ve2yzG g5ƛw5Js8ي~~<l̛X s`8UPhi@pR065ȟNA:&!bDukaZdvy)ķ랫%ԘnncU==8z)1&F[G %ayH2BRrLaMR… \ʒʕJh +BܤV=Y?8PzwD)zFy #1KvVWv iAhQݢg̼ %fHAuF(l;\rjuFB#BN(b@P e(ETh(PtZo2M)ڸUhD_sșrQ!Kb ٪"RH(IX)Lc8L{[M߼f 7%"ZsŦi@1biǦV[7z#M7( IrF֢GBJ"FXk'6Z#)RWZξ=,i̐Jd![5)hTf$S R,+lC5fm{RY P,b3"ьqi<9E6Yi"XmPIIʐ}RQC("s!'UQ#P0ljWPh!XQQl]8w[/2jb8kHfZ]thoOi `*@(~bO;AR:..6ӎC=܁ gbCqS4;Ob)AG8я ~_a>wxi-Yo .;`o~ajoѯj5o%TQdz6z>ei|o~y;iܭ[Ią\|㢗`U&#~p8xW#{/A* %9zC/|bbY nǹw766-ju_y/ʻ=Nۇ]e}}z7o#j_K4\.EgZY+oIDٚ"ǔ\b[;  ףK`RL} I4PlEY R1X,,b~O0vufcHӝBͺ\ Im 
Tax"vMl*iI6GBA04ܘhX+"xסւQd FDU4iڶcK189I Ezr^{w}I5K1璌uXii\YMPR\WM*B@N",=zX Bqd06cF4Cc6.olʦHS-"wDF0S ^ޙeɺچdҩhAhXwj%cM :F0T`̿O +Y6ּD6; oJ Q"'@h H~/Y&+/>fm2[ed<%C|J7Ls9AZTM)1ڄYTE)Qj6x՜ɵS4Jg/w:r~L>AsCjus9oP#dUB`-H1*2X %EDmT$З d-WW ڸT"Uh,I;mGl=">$rUI z[lVJZ!sc`$IHX*Ht=n%Ġ:*D{I.%醺0?"z[(ɐVL.rL$ EA)V TTPtB[DBzh>jvaPPVF %JR- Xe\]y b±D6r)\NC`:*k'*qc֡RR9ȡm2" YJC.((R Ze#@Pl<)/cm'z΍U"\ "U+V֤(l`a* \d<$ y(R_i𯜠2Hᱸ& C@z;5(!Ȯ`Y#w\҅P!ՠ|ͩ~0+1l aA0zL eP@dBEsih]8uB>7[S2ƒQ ݄Q>TKQ 0*IΡ88)$%@ aS9(q+XCei!]7+@Txo*>æ;+E@( /I,)yD),d~C_`HX޸+PE.Qv J2Hk..'-Yd^>կ*$$E'V4<( e͘`#d5. DE `|AwXk\Ba!p# eACńTE~ǬQ@q|1FPTPE.NZ/p~ع}ge>Ze}~{UЬ+Y-o|62 b:PT8xiq:@I rthAhtW sc ,w(v5A|Ѩ`a9=%EHiD0iPyMF>% Պ. %~OyL$ /uC;7C6Tm(},\d!*T?Q<{SbyǪ¶M fB(NA5.}w_nR/UE-`$d_ ]{ #Azu%mV /F90(TJrp]/ 7 Epi6n51 j3`Q'$F8ZN6Gv, ]C2b@ YJK]jF(K&gVKK< $o@Gc2  tF% Ir py Z{xwGDMπ0~T}MEjY"JNP,eZJf@3`uC:jϢ;k4Q \#YA pf9vڀJj*x/m,e ԿQn$,  \z{`?R ]+ܽgcګys$ryv`0utqtFf=*k (n> UN2QG[c6YkA#y(Y- 4zDx31#@99hp}QeFhф}D5H%TP2tyZaŎr9VFیJv0  )aPg'RnC},[.3X@|oo7`E]1("NS:P&j 4r?M^/Q^0 b^SL 1exR%f$dj<\ā U @*cK6*\T6c@sZj8k327)Vj $cZ Bժ%6?3#ɐ+U9[sOkuVΏ{ʷҌ>@fP[,z!@ DUFMR3vLZphԋaBj4ASQAz|dNУzO^zp8eJE;n &h NCa޳5fc)[pUj]CC,2cRP]$pe. 踒#P5av❀Rp)e?`j?FvSDv.nZ۩ۙA5.57%.S픳0~Ͳ%ĥ,%l*VL=_ ,~sq[I`X|} rM7ŶI/oO^O~;ѻ'WWkz}qu6R> ͯ9֧juϴ>:58 \^͚@}wV@ 7֡@B': N t@B': N t@B': N t@B': N t@B': N t@B': v@IRK]KsZ+zXtjt@B': N t@B': N t@B': N t@B': N t@B': N t@B': e&''gqZ5:D')өB': N t@B': N t@B': N t@B': N t@B': N t@B': N tnj=z~;z˨R\HRyQ_f0l2.`a1.\1.Zc{o\2Rt,ްqVճI\EtUU ]\Id-tUrw*(%C:@fZ]TCWVU'tUPR"] -'ѕ(O .e*h;]׳=DjyEtUM-th%UAɐ4!Z `#ꙻ*pU5+hmՕ.p@:D*IZWDWU \Mk+@_z&tUP:HDY*+K4aUkH-th)1} ѕyb՛㢫{0w`w .J 2+]T T]+ۋs;No;/t !r %/?h2p7P9׿ 5G $p{leɀ'T]zrvתbv6l J~4uWm;tz1@)?rP1_#p﮳ڗ}0=Yn )|j&^E»g-񲅁<Ӳ PMͼ7Fz!r6'q}w`a~/\uQ( |s4tv~ˎ&_t7l|o4sgUc]My10!,{&bCϪlo4jWYmStmn6RAi,GM24Hcl"*YM9|l1DjY$ʋDr &b?dJAf &:k ZfatvB`{#j*h_ PN2< DzEtUL-tUZw$l!] 
] j56*pi5tUЊUA)$/2)]TCW3wU2w*(9]"])V*p ZNW]J3ck \#k+,tUPv&0 gRDWXz \Ck+@i+ˍ5=ܾ+nWtp)eUA+z Jtv>1|3PuնQKo2Kh1dp3/vaQvn9Yr#/Nclq@cXV}&+*+L \Q ] Jk0aD]1a5*h;]J#] ]q UMt%#FWCW5L JA*p5 Z*+)e3WCWft(%`J)z۽`y2XZU ]Z`HWHWZ k+,$ }/вڗ|>]JIq( \Ce-tUrw*(;HWCWVYmjh-tU֓wUjw*( ]0;Ӗ[1V(]"lRnAiǀZ6+ w]lX#kYb桂}w`R2,1YͪÚ-U:YeqFs*n 0c{{6D!J/%ObtMS)QUl+=]F ] ]1m57+6gWfh힖)˓#] ]qCWCW.F] +Q`6wi曡T Jf%JZʤ.'*h;] ʊ ST;ZBWefv@A:@% `KQO0XjUAkh骠8wute &uUķ\Ai-tUвOۡ+ĪDŽp#Ȱ཯ֹ!\M 4wJ۳+]Q1ۙ蟉Q5l@1,&dķAzR; 6jLu{[TezwVi IHȒ(mD&\d, <91H#cE Yw 44M:qmAûJJ]_?_ MaMO{<>O>g38g ͮm ޷wO۶26zb?~nKnYr# 4 Nz/i7OsrM)xFӓֲDL;w %T*<a~Osݪs7 x>H?sp=W[.b|<X9? tw ~Vq[y|NWGpB7<3u'\ ~&u~kK|oq2ȯvb8f<[]H!~-o ΍oyr+Mv1jEwf8xc= i9PoQ{ ޗt{~jOs\E_s&˓q8o.1iӔ >=uL&"e;-i) ) 8]VpV;JEsSu֭qm:7]dytu;+uUv,kr<\[ˆtuVz.Ҫ$7lh%(Ἴ3p1].;:?2t'@?OOnlm ay;T;?k(ZM1-w]h9D)c!@p+נ;])Jg.LRysy )e:ZvL9LS zonI_-2A᎘- nܣq0Iq >pXb/ɟǟ2pY[,⳯+PSJeyяp_DtprB>&X- Jh?}Kɴh6vv&gkwhpt%t7BNdivt z Jؖt4;j+q|qqb&~z@_h}>e gI[75. .6j8L4sH1vy؅-BW ]8[{pa+,ILuD29k,LVVĨEPgE5pb9gr&l2RnIg'tD}2@MxWuGᄌr•Ay~3'9/#:jQ:H4!?s)'6jKA)ύؽ>l(1LOe\(QqtEQv2#At9nzٺ%֛f._"ц57F RBK@Ȗ%KR!]#b&7,)/4 s!+x{$dE? 
"h½duyn6Q4/({87.Dv*d_lB{pv1/A+E:;;k~P,2>L.n'"BlEtL3Jg/HJ *Y`sse>S 1 2)'Za>4gLIǠx="Yvv܉wNR]Ë)(f@g.)Ǔs#M!Vu:D]迵!͘ Q`캟ؕU캬2qF/چFʏ]dq@ÊXgL#O5^ )LS='ϋ@гglu}1q]I01HV4lIJCE"lv*qc(GcM: 3i୍c!Is yzrȒluߵO8_9[8Q" '3V iv$#}#M+ +Z ZW;7C,zE+J־|0wv>+ݴqC4ip$w1jDaw5$çWxa1{N_+W[ "kj>'IKp(& Y*62Fp\JZ޹R$YJ_͂,BFcBfki ϻ#,ԧm-҃o2IZ pO{&7 fA#=Ej⏻>q4@CI啽څԡy9܁ Im# aKb >J4ABܭJ{'?y'[U$r* "iFiCJ`daLq1OJ5޽!^ES[%GȢHhxV2Dz6V'e1xRV W \™d%&0+ 3wЩr n=xgp7Џ', Uۿ)Ny1EQ>W`dR3EKuO^/AI;ow ꦫN8g O%W{R$6 mBBZs(^&}I1Vpd"`1K.zR,U fQv7 s>HFjGzJ5,b!TBcIpEQuьGli ^NvO;8]|@y>2b [*Gfj !3+JK $ 3ifuX :+3c(ƞGQ :T`S!h(~h2v̺,%p:ǔY!Atُa<5+];EmUUڽ`5φ="QȬ18Ъ̸FC9TrU0+m| Vq 4CdEG a &Hd-)JFu2p5rak/˕` "VcQWFD#b8^ 3 ` S ́6J } ʣ:p&eШ"+G86N $w$r\a" F&[#EP+C-r#EWB%⢭wFtz_4ḀjabYP#s"נƀL')p+xX;8<<lq:K5r7Yl?0>яOhS=Vn?rWb:ri43 cùeA2Hw7oe8HSJ9' xD2d-S(ӁYФxYE0!x4Zhq5^ƏkۡpE.jV>+\FSl3'q[lKSy݋1Y*DtV`1פgM JVY/t[$:tRY QCo d.B4ҘrtĀam9WAzI鎤BR@* %m1R<)OľR68z mPi\K%3^ļ26D"GM "XaJ?$ d>v>8WEr2F38uIƄ,'nIT•dUt%W 늢VGKc2biz됌MR,/6mw8zwӸa8]G2#zėqjJ/R|YԷ@4Y# x2sSzUQ>!3jp)h.z@ܢjrv1+ڂ^ #uVreuןRu\smSnlri~]mܟ/CJ_qYppsF+K ڝW[k.^2z;ߌl7lZyo~Ooxur۽N,Gv 9k8T])t+ZSmOvŶ״H 񌺛ܲg݌L=fZ]nVԶn !O`.<9JiM5:V;4dDa㌂&Z+JNWe؈s^J#ló(p#[{;|֛}w̾33,D 0kR<ڼ_*IXm&HMO)Uiu,ֱZǺ@eΆABZ%8Q 6ӑs)(*٬@:"I!Dz5V8NvBPRhgJEPE$G)ca@8!FVD,ap`Qd{lxox)>* ZdV\;&E*MK!5y06^xB{B- Ral}ȞX a<70&pP'gNJ)Tds0 =7B,I|RXLu%HZ^DJO"ƒA* qH6( I$ȳQ訴1*%%ӤIдKgn8D I,ZmF[.DN͑!yY2*Xˣ3" aTڬ[L`3YhI\de[WH5r>U@҅E)dbZ. 
gR`&z}B 5NvZoq^XN!,W}ғ5I֡J V$DΎ Mڲ(0h@&:$]HHcثUԿ/ߐ  :MY]WʎBnmsO3qux!Т[cIA ~:URՕM5̹& UJ`#ssIQ""ze4%$CD23W%Zpk%Ff~?ZS>VOb&|VhDT[0sSr|m<p`^;_cQvα?c#y)?A>C9="$u>rM.-a*C'Rl 3% :dR H#Ug3HA RL>K4NЙ [d:2}oَjHZH?|l.yT8R>awBjp 3Z'ql߃/+~?8tpWX["D|/!&Eocs+e@' Nl%`P=0`*Ƞ $M *nBLmJYHae,b0xfYU`xuQFC#]s}硼SW2ݖ4bv˳HdJ].+dh%r-Q&-)0Q*mdѸxx2zBO<_$,߂1KJR4›Tj9)$2tVs G*Sud+{q 2$:XIoŒà*Ac]`Q DZ襾wjVX7U-2{QCq(>>I)m].#'Tċ/ݟN,dp7.H`@1bXLe)QĘ ªb$(3*>Ə)Q=m&\fC;|_4%H֕?2+3m.L Av0ŹXwmI[5쯇otoӒ:rW?|翑˟?,m-0 z E*ϖl<~poɾ]ͭ dlL؀QccUfbx{Olxu{:}_c:Э:Oj<9?{{9Z~vo}V~Z^o?^ͦA8[({w5÷ߕsWo矗C~Nﮆ%ty b VrqE- (HZ͵F2Mͯ?2mjВv,Rr8 `L%0f>=_NvKtZM+WǓkFW_5~]$mJAN;O+Dk8mCߕO)YaӜQ{A|F텻Kjm3y t-ŸFtޏm[Wc~5bk?W?!GAIUwG.hrw%bsM2=1Mw<))'&aaHɴ]9wd<=X <1IeޖtP h t˅?"8 I)>w:~uj!3BH` Ij/͓b.o}6(}۬qnCP319 N4 F']v^ b@[[5[iZqZ+55-:̂4R䤥 UVSB:(FzT 46`{db5^rβ^#K K2ahh !(M "/I<(iɨ%.[Be6Y5:=.6QprW%6#_?9YTz|mdӏZf&[?zhp6El~g۱&2?RuӦ _=39ͩy/diU ݌ dQs4phpRu< {ٳ>Ҙސ-ɛ֛S'͹` DfE ;FZ ? JT+/^研mn=bhaYsZseP [m 4m],M3>eyPGm}1:ylL~Sh 0/1`9=z{KL"UJtQrAKVq &/>ܹ`\ʊr![Eo U%4"NʋiŭL[' "fd&Bc7  MҠ)Pt!s=AAܱ;G Rg8={|*T~O{W}2 P* bQJ結i*!U=𲊥7 82c=Z`zzlIۙe,JЅ AJ /,AbBDzph[ pEŏgؚ@%q4-ņ $6E!*ei#M?(whrlgS)RsBL:–JN0DBƁ1\_9Vnllod߂Nl=w5b`Rs-!jH1B1ɲ]}th%T(m Ҳ UUԼPPT|<җE0ck;j0 JVJ;<#7]J0ՊT J=̃\B0QX">RR"Uc8^9cyLZcYC[u)^WMy !y}. gR;vQ>l\ `6+zyn81^fgrUR DJŜYTxYY_Bm~~z`[QokP.xWV ~TJW&7io:s-Te)miLh= ڔpt Ș*9e2?ُճl44X97𜹌SUon(o9b1IgLw+EVUvS94L#\_mf&Ac]/ʰ| ) Z|PMK̀\rlclʙX7bywߕNg*OTr336VOu}AP+gHLMbŒACg%kP@Ԛg&it~L U,Rӹe>h* `eNOlWsD:,r]ۤgm6*UCC\TpyE!`}T-K3Q|H3Uu"XY0 3Sy)+]mxSr`\z)(X0L/JJ/j5^ c/{)$T]'dpEr+R;|\J+F\ $hRHI-z.Cj TP=\SlB)J9+ OW(XtfIѩ Ccgqew•aq HvԺ UrF\ ,Sƥ+lM:3(ױd:VTqq嘳*v[qErU2ur1G\}6V0.6㪓`^MnߋfTjp%G\=(OfMFtSc{u(7pEb Ê2T@fu9:4o>(?:mJxB^(r̝Q yM9)!+K }OwS} R9`xC[pyB]"+Zㆎ+R숫ĕҸFpP H߻"f) ޏ>H2}Mm_I.;T:A\)a J l:cW(A7+R | JKuJBO=X"*\Z+RSĕ&5v?y<qef¦+T1+R cgqeR&+lHK&ZWO+xp)uRM: (W3vEjZ9M zngPc]&i9ݡޘwc =X`f̋?靪S >Vg*[ptkf\]3*`NE.HnB_wRzg0Z 0H: Pv":A\ Ř`  HLW Ztwu  HfԚwIS( %; P~@G߻"֎:A\)eՓ.,P&\\eS5+RĕւٔpH3Hr! 
Uz"zNWF+!R d oMK3j ޻"F\ R 6]\Su0t\G\jIul2BOz$\Z1+R9T#+u`ի jzpdO9*710+.PnZkmR٠E.a㢱7h` `ٷM.,M)Q@7f`FUύy׏ V$++\*"+Ri` JX!IW(X>MɅdpEj+R78tp%2p$r H}Mquk ; PdԊ{WR:E\)ǵIiɕ HT-{ՓJ; v*vkԎ WRW'+,p"X"&vTgIqurHNgZ Pe J!F\}6V0?3I}{W>n*Iwqz?EU\9qN {t(7؝o`ԩ`j/.m0{KW%r%ɝ]:r!ndݞ?w R:5ppP HHncԺ83lo׫wv:3tHf׶xEvTsFۏj+:Zd*T9[}gYQМjZqu1ېJ 'T `M0LvwNS싲"; /Ir}ӠcM~/[9v-ӿM;;l)4wR|5 #&(PRk}5RF_}5#8Qb*\\= tqu:ԛ<0 Ԧ"\Z={RiĈĕ[PDq\Ԫ{WRÁUo&LޝCT؉Vdy& }ȮܾtSzJEӁf)m/=s̞WƋhK]K]KrO]KjT*9kO]KCwH Z}B7C˘9ip42%\i oMnߛSuSq*]nu.KԎVJd1ϲ:hCY7o7ً_^xkZohs_WXA5>,~4>هGCc!uE_|F|YxSjw疱 W}||[y,Wh, 펾;D `W_b;ccwj%oI.qWc}ymZm[z)Mߢ)nk>iͻ[`VݝgFr@QTb@ܨyrO; f5x|oߨ̭T7 zO+M7+T1/eq Q6}/[K)"+\~ yU^~e;7ֳ?\oZIZ뻥YboHxY\Emע`JA`Xp^IQ+5LI T*-Ae0Ngj8_u=TzUPn{.v8SBhK7ArELkxbYUevW gzLB*9SqZmdci/v&ud 'JO[4a2fʝCeJ\)\<&.95e TM&DyUr7,F&Y gf2kɮP#Wbjc Yo!jiZE0PRaa/Ø3$JˡKPKcf2Nk#1gh1cn\mv^<)VK1;G3 | 쳷)LQ.Wk NGFXC5َ0Xub0;SSm`+}ijsĘchoMS#%CsD>243IqC6Sid[pld+%s-!Z}nq807eĘ?7!FlF>^ZC h%Pjr23FT5g@جcK9ɇ-SݒDBe8_ȡT-~\[ d6SfX`95XX -DmMw/=*$7_8Uˆ}ؐl!z6/~Hnվ\k.yӹ$jCo4q.ٺiF;Ƭ\9L,}t:dhZhPe,>]z0Ó5/+UvK`S'׀)빢<=9h-ķfˊ#qj$(Q_|=@VCT% 0Q±64.Vb#$l SFXΎGe7R`v(`M`4uk6>k5zm*PP+,%x_bN&؉yok m(WJ%*#79P& 41&&X(%'atJoBeF|3c9lA3YK.A@&w#w(!Ȯa@Jݣ7CAwjGd3E#eXQ,W̦hNOcE5&ۉFr(` 6v> dvVy˚R2|Tް9 L09LDŽ1>"*}PLdESg*ֺ ؤEtX"z/ W ]QrB) >@ &={t^۰`T#./5ZzπyILP0ɚ8n]Gm!B@8>WUS]ߚޒ}(7[eyAtT2X!?&?\]뇀կ]BlÔ6J4`$9to8 ,XAX;KRsX9XQn9wK84#n _I 0ePVB̥ܶzZ1FcGY$PR(иW%^D[aU @9jtEK<,b9$(@"rQ:׈7r2XtY$ fzd 5\ u"N7`;<>æUeXJ)hz3 ZNܝoj-$4=> 3iaF,ZM3ڀJ}~R[o ^p"XHV @a8/䐪t ȍG:X2Wqo^`r?ܦř H B=0xB+fFfM|]= ޻V,VZ{%eBhY04fx31 +@*Mʌ<͝\ÁwæD9NHzɵ5CGnM.SA C걀޶\ѥHςO(Y<#׷n3c `?V&A%>' 7⢑)z7qCw,NTCaآ:|31rT],kP~x"mL{c6# 8 m8:װsJM@xUTAdP-rd0o!&sp ΄RmTc]Zg6:B_T}ڔ\>@-r-@1;wuU dAewm-{a-B9hY`}!sZ)Ah@.iUi1k˩c(Lq"XЭxy*…0y,B\fK/9pUp4e ZN6tj4&j~C\Y.X,vSV[ ]\ v> q]q;Sd0B1Ec6凣^ry,n? 
x6$({ AmzsZ|Ҍx7zӟbbs٧Kkv`X`ѳ/_=vxݷoA.=o+9Qqw?WR˻_^=DAoywqߛ;wwo/,2Ckyc7o!s+twu>\_>bs0G770/nru1z};aOG 5ebrb?_XHU$gxK4`PZRL(ɇ]|HP&Mc(c5 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@:$%o#p6I9C$ugr6jH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 tI G$A=$m' O> 1k@D0MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&~I or-%N%$'2&1 ($&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I Mi|@>Zz}ݵ5Q׏06F?wk7oIqK"vKfK@N?ipK KW8ಉ[+AkӕtI *Җ +VJt%(#+]!]ho p] \ϳJPrP:C< 9f pVJЦ|t9*]#]%9%i;n[+%:kW*]#]e ;7CWnΠJPtK^ɝ'E2p!u.z(įP?>9gr8cr?s$L¸v5aO7/Lk^Qv$Oq{wWCnu7?9s /^W(C7,?v7ؠoz꧷W( iܽl^@{_sFfjooƻ`}k_/4~Yúyu%(s+乼+q;t%pS ]-ӕtJWHW>q !!m.o ڗ:yhΑB6Τ ѕv3t%py3JWW@W:C(ݎBW6S+AR9g K?n ] Z>y*]!]%y[ι+kVJІ7c=NW>x}kWG׾к2tu'!= GUV쥷) 5#z+C|GSpœbX6A>bP>l޼Gl?j\41;jv[J/nHc&Qpqgi Aܩk Aj3nCteM0BW6S+A$ޣtu>t 7DWon[+dNU:G&G7DW8ЕMQW@tfPP:tutf U g^1*n+t%hJ2z*R4fKf+7s;_{>]EQ]1%!di3t%p ] ZJPtutlt~K׮weH}d2YvKOϲP':,:,9mʉFLL?~ UWH֞I)sDΎPĕGwUv*Rv ?\q씕de{h!8?Q1`%_۽Kv>ŵ%x|;Jm 0azZ齺a9ڼ-I؂-?Oj׮1m>OjSc~yvNtםGrn=kP9Oj(J! RX2v+Hmg|@+#F\Q\WEڷNxz")Q> \ |${ઈ+ձUV\4WRj~DpECQ\c"j1X4W l,8EnrU> \)ɔ>&cև>`Gq=*Jv*Rj•("a'%3x4pE3WEQJG+#Rd 1XĵG20zI \}@ P#+1*QW$-pER 28p/z8eqڻ}R.#fP?<`@4G5٠SovY.|rE=>ϮFϛ]7bKkM]У.CbZf6mƍ轜S 2>{Lk aUTP5&gGM/&ܛFBXZyĔFd%NL9+cXNit 2{U(K¹$Йd%g\% kx7&gx4Nx?;5=/PӢ X~ ʜWGMK-Ն꾺 -}`]7;\N}/ cvzuO zɯ uOMax~26zpnP+Ωƨ<-Z:%0 ؔG(< rɴd{=W [ύBFn)m379^}Qc ~seZ$Ǽ09%Ezީ̽RpM`5;P2m9ʃmH2A%O19 $4H4R^kfB1qQՈ4|oFzZuuW Pz%=e^$eiIL 2\omƴ-xh7ETIȭX?:b= >P.~FѰOjVٚ4aQp9J1 *K1T\'rZAMкMܶ'Hp/YGŒPAS6p3KN$Iq˃.IP5e L ͽ".VhBrL&$STD$yz\^_++v. (4R'Lsh8 D-ݳ_b42: b SDAYD o:c4 d! H 8gQ^C X"0B&VGQ[Ѳشؘ8/}NAҞPk!sD %>ɖsتe4VJ&ώ[RQz~稏Yiɣ&^oZ(^rңgjWxɱcGbGҙoƎzW>-.(R-*n1_yy2T%z %E{S"n:g]x-}Nđ\qFKp(.Y*IW)eR"!%p! 
$R,K ی2f(g`聒-jM;jni}Xq*(Ձ(FQ<$esOSG);#Cƾf9lf7߀嗲DA2L U )Y;˯ >=lM@(@U.t%xamEc1 bD29+g\j+e9DgB>c'L fuC7yRFsoM燃W$/{VcO~I\Y̙a dJPX#:zKVMvmdΌdYl֚:({M"[vu`^SX# ~۹c18 aƗ4M^!&B Uu1Wd]#θ1V*Wǻ|Enkfd10-(li]Q-9fFGzdL WiI~<S9 dNz֑7 \oyֽ{9|V#fͺgIUfLL>\lRj7{W6"xfa 7f t%(%U7PGgU<"%YlS+IH;Y8vHR0KL AۃGLfz64O3I xkjH+:4kk:(eE \wa|vm@q?K\j 7l_^(u2{|=?W~דvW+}]x kIP粆߾{=M_-Eh^b)̥#.&FFxud> y^}dɅQgpͯ?qoLuh2;;~s>^~vɻ=-ON:Yy0os.˟ZW݇d%'h)^9|oݐt[)1rI -.F:YcrZ8emG3N{&Gz,J_a- ' YK\Ӌ唔=GLȕ8O+}磯Q5r HRE~ 71Xlu$+{$+_7ޅ7$炑hK³3:?|)}1ޘ5s'@/9(L[dK#fYElLR;\jsoՄ-ug0l\s}iV=h>0Ia|^qqjGfe 3"į61U= Fr˅D{z[NAGkjuuxڊ}X 09u6K}"DJ-R),VEgΧò@?"` _yJw"SLDn=g$h@ *U\h@K2\Z6 V(E!|VViSmto7d}HVzm({ąUwt(I}ߏjm:@I˦}}3ɂyDF?"K$UDZ8ݼ!wEӁM΋Ϋwvq* LQ!R4&˨޲"^ij)|Q*n* eL+0%. >M]J0 3SeR11i57hI=sR/`n%hLJh 7ݻ]y~%Noힺs7µ 0:) Iq}XcJXɸOs%(dkAs&i;dL1\|ჲ[f-xb10{Wô:kCXhJ5HDr p%@ 5qǚq85u⇙QºK\: 3z3J'% `{rTNd6- \9v`GarL^,2M\$MbZ<񁤠B ъg4p^҃ +Hw y;o\PxZd  wYEB98taDɨ]xsIwy\ɥj@hh."W9WK6eu~6 Uؤ շVs2r:4ni:BqqZsT𸥢c&]-]&7>kT•ϛy{7LPoXtRǽ&e3F)Bca11rE Yhh0{ .OAϗ] KpD 1i-=^Uon-33xFB9aGSrt甇#WU.94e{mIN $K6Fr #%Yav۝MeJ,{0| 9jVwf}N:^^:Nm p:dRVؘz]w܌â99q=D:M 38FϥN:xG>[ԡk4F#Qܯ;k)zǍ2Lz WY!KyP!hA>H0vӘ`p/qH0xn=`mKp6pQ"Գࣔ ,%K!` ?$ nuP5 j{t`If@)؟hD($= D{Qpp5I Bd%2@ IX ^89\p/rv9ԡCrFg|ád}m*7$ˢQUeb<8:7R9k&Ú&pOWYKۛH)Z m0QG*dyAQ%iZS]$=@"vHCO?2cvGrެO) d[Z3I4dt^\H&P7棃t𓮅͇XFgT4[6Xxb1 Zkk#v8zQZ,'>Ų`cǝP>(lV<䠨.IEN*V4pA7Azmf*EWCX{eJ{*Elw*1ʅ׉j XWIQH0.yP&>1)K-.6I '14XIBB!4΂$Jy)\:6&ȠF, s[9H,FDF\BϚN9L1NjViuY XPfkeh$nQ61Ld2$0ǴHfd`#`!ƹ a\0CN C[8L~] uq-C`sJjKVQe)hqv>. 0W@k Y7n*7IC=yU/G.E" ~>x{.f~2\>,b Fc$*`%(Zj;F>;ONdi|y0:M]G6 _󐗏i"ؕ񟁱@j:)7g?kL~yQ݋Fsٸj \OmES)a6? r-I&oA/ϊ_b1ͯh.H rdcCL)( ԕ^.dMra=%/ӄ|6rU2Y&!X*Aq\3cY*G PTΦߍ:%b6>./eՙL]rgWѴ祆smCjkz^x첮Ѻa1i>Mw{kzc-ukس5;nSOz΍ippgkDn%7^1-bߘȀ%ޕkvft뾾i~d}]v`'o=f^fɖ9Wo q78:p+8r[~7tD|vs>ǟpV;`qs;)hVw߼ءTqng{p 6KmKF],պt22z我Rrt68']Ipҽx1jEgxVTJE%"gf.#ժ|@tD}2@M*S(bNYH瞝zyHw.儴r? 
V#͊FcTUSl;L]0ͺ*9*Exuv}ÿȍSZ)!B]^Q_1gI d+E̤4%n@@V  >4 +d0 ezH_!jy0fLcgٗiyȂ)R&u=C,")׃ ,EeeD~_d%cYgåUFM`PE ^ԒT SA@V@BI@PJ]K1,qHQLF Bz di}hGOej?\^ Ȳ-oVO~g?8w<A-EH܊{RL dnj@  ,p%0R>`Ĝ!F0)ȣ`΁0R(R$)tAC))r@TVjlV96WE;0e8_ 6݊]SqG.'4E.sI^V⮂W61+0UZ] xf416Q- eꞘ*+Koy{Td7o_pɍ{cUy>vlY}>@b'b0.&'2f )ST 肷:~ڒr>2 ڎ8;"B|۱gq9"ǞyJRjλo0y^.Z/˴HI˒>l>+ΑozؐU?ޡ-/&i|!#}Ow>年{je(&eQL./xPdV"ۭ]z 'ovͰ#gywi_/ֿ| I؝&nпHn;@!4ߝo|Q]nepr1;×ܴjxR~ϗZ#ʚ2VM Qn8NAaG]j8qC żg9@_+6&9[ldV>q0$}2bҡ`vlxIuuǏ;yMC#JG\4)D#è9z`2sE.Dp,G TMॻP`HY *[SUbK-qo>|+MMJI!n|t@S+]𞶘i~$]مƊi%:ٕnJ7iF Ne`GyɳJYΜ5w)VdTAӳ%ӽdgo$YGwS!|a]1fh9x1޼*>|T/IgLBN+"HE$F+V$S!cIob@S!82Z:9ε#|mrL 1B'ʁ\M$zz\7U矝~uE>)&WYYZ)(!b')ZT"E{h.R[©}(q8|-=qs @/oK1)B kؔK '8r;bR(_,HI3cDK-FY`xYJe$Jk]Ru }LTH\_ӡ—˛?=SQˊt3#361UpC{,Pk0 )l`U8F"eVϊb"IIJZ \h VgQ1I,:A?W?jER?cT7㍨䋔e}4oT}7b:pVy@ Ԇhu93\ k)"9(6q\xwȰnawmM1#2Ϥ#cuc~@PȞU&8Ț$5h2q;f]8C,<>?=淖8-6qbծ:kUe:!u+8 kue;Yu+^ 3&`iD@>G¡Q \9d4*>A}rƴITKLbg0btdKک`7gEU#1:E[..vvqmBNF˙j\D-U,+ȜkP">jX}=܃ Ghy|=r7$_UuAp]ֆU~\Ehtz3ݹ ` o@6>p.d9{r>a G$ˏ.IYUc$,YVG}V8KRt!BlDŕ9)eS`‰OJ{T%B|\e4;]Y`FRaq8C,ǔedM*2%Ӿye``(\P$ -$r; iUZPpXn?<穼魷! 
Qa0IgnE,T{M?Zxִ{YXzӧ:-%)WA1He2$EJ\%zk Ѻ%bWcˆ9:kjl%4x4%Uy**4JWNЪc+ S,m6&ؤ?B¼*::X:$敱H,87 ZaJTdI}VNmTz.fy:;&B\7Fe+\9LVWJHa]j9A)mu4)WCFNǎHF)B׾ |p*n5Πxw : U R75KB 7[0`A JFr%E~T 7^03M959&)R 'B$8 bYDlG9,5!6Z1QkIޠq}Ћ#_ظ9v@HI7t/ŀ Ѿ>4S: uHr(dj,6(hUvIkfE|P}v]d-AlcǿhV>ZC QoE?uξI|M"W.̚/EmR Zv@V{TK/ɫXJ@n,+] J͙0Sr*vP7"z.s6 RT:FZ:fP:2B"7qY#YQrb&'^bP$a)H E}諚8MdGWkp4\Yݠs;|/_h,>^.%AlӊkI+D$gK5GcN+Oî(8dÁzg ^gb771&2&qnOщ55n)%6n)F?:]qȝyyʏPTN̩vH@HKsiސ)xO35:Z.M PEԥJIr& UJe4vU/:t X'bI +epcIt\Aтc\+52kZ_WVIޓB4[!fYXt;|s˾ yĬo)a\uVIzЛ#+jB"T9"tRNhYl;륖d!Ҟ)/5Ilջ~\gZKclY˒TV''co:)*Yu,fyחxt>ٮ_ Lާ++ˎE닑K,CE`)4N0l4&瘸8z5[s')~~%q7p{XbXlr,@W>ue~mqo>Ri<_{3S$ּ4VMH]"$ڸwR4ʾR;I+Wڙf')A-]O7dI@ճ,3goպLbgM]Itu"хzk"ɷ>{JCCeN }ߓIOw3N=vچm?כOK߶oE>Mg;7xFޟ~7#_hA ~T|ߣ8a3QKьNZ]2cη[ GF S:7%MA.m9&Áy;F2uxgPO4Nll$ {]koG+q?$ԏ7Ͻ'Yꇬ5%c>D"MK#ǀ-险>U}UnEg0;Yt"p)kƇȝƆOj1XuvAU`UUE,l>*BMgǍ B])yQ]MIdzqۃ W.kmf.t ?`-iH{ L@)zW1,qrPŠcXbOs1 $l*YYS+2+m#(5bxX 0WR(S;I?nj9wO/wظpxl;wom,.a|JEh]c@ (%$ ňc=>Box18 ^2}q1><_|7ȣsMZn՜VtՀmN 7_g,F̌%ex1MS5|۷^,_n5;ݸbs&foD:d4pu,͜wӣ8c:Эdt1^h9մ( t8"t rymIl5dĺG^{Kʿt%kB[t발v8!ZK3,uVb;%}Vݣ%czՌj(YmRD Oߞt|b]_ndlMO/QgG<\N<0/~nKKωo˃h}SX8wdZ]?ӥ'"q Cj0:&ktğdVʁa1 s9{NAO8Mkj-E{R˼~2j_YwDUIF$r4X)TQ[W$hFK(B2`U^JmQzyziPKN $U AǐɁ&QM%9iT2i_J hdH*]YbZ2xI wjsD lJd+ۃʘ6vɕp ~;t9@x8S& k3==9 Udշi6r<K. ^|u,-4>_j4$<$WjUl 7\beSF|>AsmW6¹Hh+MJW-<7GՈLrxpq΁F٠%C&4'1{b4T<=[1*3n6# 4ZΚn$Z1344qt$񏘶vμ"Mx=ah7#{SDΓa7OH-X8 N -sgew˟ފlGML`,"X4. 
˭J+iȗ OӋ(r>kBi3Ql%zOW,=>].GG[x6pBߚi'~O&?nw2x1{5m/jj1,w Ѽ$5: 7"Ź [Jz˚_=3ʪ՛x]# ɇEs3Gsr=R"w'W1I$~Hx>w}+:Dg2qG:!HO z 7[|`8弶 w@&E 6d@&Gi, bO&؂{2AO&x/8!*!5:{]RTeY[W"] *V]F5J$ B%`C@@i 7Ů\Wy7cOH7=/f^4Sl dY^+*rhNQVPU7XWt bQג ' muF^s;=,OI>y9dZJR2Yy)ْS0Klg+hR6@)IHI E (_ B۔T5MRx 1g3lTV'qn,q,(b :DBx9i\A$ tI=awӏ,(Лł"㥴uF7SF8z&BR1$3= jٰz?F|_7dJk^az2Vo;VΦ g鱗\@ 8Yu) ђiAd%iކTA*/֋rP3\ :&?bӐ ޴=ƣl_ Ⱦm}"_W>C[\vKx=R`Xe灹qlryؾs߹H)TBd ho2T9_2~X2erxkhce;Fv oyX6k\/'iof޺53́ KZu*ZF| kk\,مUȒO "1KB[%*j !ɘ F#ΦndAG ݪ-i]`?j[ݿN}o{#:i5G«֛)ot[mx^!;kR:}Eub  *OmVlYs { ~Y\V'7(]d6lKQ60N:>r`*p\b bDYX1֛Mg= Ԍ'7{hSy%o#Xn=a: wskpL!o%#)UcR:G%bE 4*)5-ѵLNB] T (gԇS]Twn߷o}G]]0нzv1fXF {S;T}PuMr6kdM{F_J|WUc?Og/-XUip8޼l%Km؟\;s5!08*/GǽY';ӏ΋3L^lrݥPPVJK2Ir{Xd:ٌ!E-G.{.ALvffm[m< 㭔Wt1>eXf/a8Zh <h_;yr&J^V'3Q\^F5[ag.3ҋ3UXVtVӿZ0hϟ|A Y޳qS}>F@i~@اL"|;"eE'zvgwgfgfAAO'I! ]4ݔ۷m,z"嫥OVpEPn^?ۏnN̪b_Ûh9rOIޙLG^7vQ-ۖ+5*Ѡ+YEXݎo+PI}G|yZ-N?7mN]îEfTAˍ=vvEP -{v+y% +dkɰ+%?vvUDȮ5'Į`4dUO]aEq`{eaWbϭgLr~/6- T/olu` POj:M#6g9i]0K3 ;`VoнQ_WFD(fǣҬj4#-WM\"QWϚO4Bxi-Vy@a}gM<}\EπsqWP~]FYW.4UѫpjU5{;L诅ʒVL3ɨ÷ fVK)} u.8k0YALveXx8+u.vo}.]>`[n>.u(1ŵ' ⥵ eAav4(8.L`}\~'ͶFq2Cc >dY^b"書wNZy`Eqdzɀ(՗=bԌj &&Oܖ~#'=桬v7>U# TF՜t$?ͷi>GOm۽av4~?ڒiA ~mP]Bei7jKFΣ)J G66Ow~R)+ijkˊ֜ C%T!Jɵ:9dge}̺OI99)d3Ii t.I15h d$8-)=;0?hiV.w?yA7҃$hCșA ^1L *W3]OBzhwiRmIӥzΆ< 7Z &Mg+7a'~VoٸyVފy7F{RTe8Qy%Hc[YP?•9OHlۇD LդsJ!vc+SLď*2U{é-bAVp^92Ppo::Ad/h$+  [,Yù!Hŗ& ЏcPlwYg d;s'P/Ԇt%=9jdBR1[> ܥvGA`Ou4U#!$A&hK! `r,R1H/x6%^񞐺%m]{ύ׿(#gN֪`H䎐ut^$:ƙVѸL۸eè(zȨ0(?e;ㆤ&dxcb.hV Rj Fsg6))ֺN6_VX}ɡ_ =v_+N8rhFrXMMzrHײ4󶟳P} 4CSF@Lm[|=7v&˪)6q>fMߣmG3֬#nFW/Wi^7L$hNqyao.- M\HW?_+Y1)K ω[j\=e!U p.F):À3RO(zr?? 2#@^ {ͳ^tUNJJFgCS:s'DN܂B&dR @ҴM\?$C1uΞtv|V;_;KY |,(e96JD7d倾; %We/ju&jדz=X/˸BJI[Ҽ,Ɇ9[N&3@"ZtG'.:Él.sL_"|0kr\EBLxI!ͣ9 9ǽ.ozj"B796zv,J|c[yCL8H{DdwK& +}5J:IsŌ9<#0VX3 9:c/z3~0z#jR7ME$!w6yg qYwm:rE g&k:L @ Zh4K2zWSkA:Wa"\Q\GHH 5 iP$յ:(.XR{Zi<|D&tJ^%Mb`s+ϨTf,My>9Yp[l`08x9dZ{9R9I6`VwL T#폓XXTU Q0:. 
gZ$"ɗ!fJP% F%CѤh(Ipȓ!&霹,d\+ẖ@M13(>,IY|cU?FűMS.< )YcY)iXPŲ2}Xm>8O y?;޾HAW0ȳ6^D+\N]koG+?% Տc56qꇬ5-2$%KkCRd4'@,i9>U]ujl/Kcc6Ϟzg.UԻĪ[rC֡V0}"m#U=vПƵLB͈aJ@ AW7w^3e~}֥\Iwu_ˏ!$5@>pMvS,l 3uHu)ZQk60JFr1,q\u[Lf׫g&Ζ5OFgԡ#KR?YZMX |ϖ6;`h۬ߢ.+~8t?NK}?!kW%yn)z9Ѽ Ȃʱ "WʠN$""=gz E (Jg-:eR,9,Hae(1fRl-INHbby9qKWE)I@/H R$[yQ0Tvp{ pWz:%$m ˭I*+FO_/Q W.`MӦNG&@JYfPr+Iȵ@`IBey' O35=,O"2 O#x_J~IrH2CF>78k57AkhzSðk}`3qxhT $QL*4:dk Q&&.,z`#J*UhFg5CC Egw L^Jx/ t3e<+7ơVء+m${D0.B8FƐdhLQ5ν23K }S z?gGqt|R) /+,,er)+),`7s8jo>&-a<"mfOl<쳟+R"yoW2|xGhAL]5d6_oh䀿.//t 2&kdc |llitˏf0^||Ɨg:'Yqt6[}r\|%l499x|yO'~ϦWAӯ)>g?Sa.%9;/  ޼?yߖC~KÒH+z2LWG9DZ+.Ïˈ727ї+N ɕc;W-X<h\E@i'GYg&I&f>kRrO/__\gdlgKMf`YwG&!!Uq)@-$V—󔾌@܁>oF9+uIE۲EB)\6ͷr<&[?}h4$%zR>bI'6'dZ{3%n",Մ[J\;]Nta n,YE-Ѳfڪh ј:&:(-S7h(BԈq]P'\9粔 9.+TQh;R 0h1 '^( If Ą 1`>jBp)e"7nW'=a'nm[c*mr:sa*b`V{)R2W٦_Arx Vjcf>\9tSx.V^0sU«F^U{)Zl7?&G 4g] [u]G lDZK$}ߗ=Y8qA,tꐍTL"JƲΆKQED%78=s>;:hvp~QlL'SH"fY8w`7\jQ"veD7y_eZ!޺Y-r_\SN5܀gAc%PSݓ j^U3JkNƣ}zͻn(b]kvZ|W`8G0[cGg%hMN>1.1/TI+hH1E2.xk-#Awqj` m< ۄ\9]F~/4\9F3ЀO14 jsZkx_lM$z!zgwݲو+/sRʦAI=Pef2[K큳4up>jO88C,ǔedD̄Ĺ}-(wߡ0`%HrD(Kk|ZSfS!* &KiEZu:Z^(iRȤUKtz봔\XUW ʐ)q!X`譁D5cH>1ѱu8[zܟC^S)݄Y8(2^a|ZZut LyP & 6Wf(أ\P@k!1@bI( SSmClut !ܣQjlۙ JV>m`+e=ǷLJV$~mԮGM}ҵcq. \70ץHK6 B =Xf͉O_<|!/#7'G0I`vфWL]}ǻIk,2nR"4+LE2$IVe)1l]ﮈs_|jГslс6o4LMz;?tH.Pր6,}=j?M//|qx<;3tn| `^qBFh h W`<] O{W/$gh<+ݴ)%rcU:!8keIQ=eu`}{p<8@y|lE0'r|cGj4YiKɰA-X׃JR"JaA.PA;kmh>kwg8{p׭P<koi/n;-'9m7/@'OԇeiH9=sgg5W`7Yp'C˺:{"@N)tC1WBKB49w&3'XBrp9 Rɂ6Q`Hb@~AebӦDoYO Q!baBN֗WG1bRUYaѡrK ];;a$]Sj+ ;0* gkdjêEJ=oC20#ZżY%,gNi  r+d2*Dƌ ӳ~崅wR( 7vlo4FMOjG-?{/6wMCs?>| [sⲘr3؊e6A^EA\G`]xph!{٦(;C E˲EtM{EVt(xGWGWcl [)ZCWWmp? ] F?]m v5ZJCW[͢+] 2Vj].nYN~Zk8(8&zD%+7E#:GffTsݧXҵ_o8[()սWQ/ˠlp#|AA˲{:soΔ |QƆ 6PY4LSzjUѳnCIZ}2I̬" 0'e5nrz96Fy&ipohn(q6cZI"e-thm:]!JM::@`$"BO+[c]!ZNW"Е6YW3Zik 2BB4empJJeubWRjb۳>p)m+D+>(J)"m]`c])EQm+@ˉi:]JC+&Ete{+[}h M+DI; ,ed]`-Xk ֬ "^x8]!JӅڴiegjB5fwtvv8&S Ќ`ǔK"nZ۽E=|F- G ʀaZT:/U'dFz! 
>O*&cgWo[,ͦQw [ŵXº\͵2 +mN2ιτ*.#ֺ<&~H2ZKz={NY̓=d<KoI; ".#ζC+YJ坉-3ygzj5h-gM+D)eGWHW(y K B4+nEtō%{\ ZtT.m+D| Q.vut%Tmr5tpi ]!Z%Rttu8tQ\U+)o ]!Z%NWЎ O ~(]!`۞UT-o3(Eg]"]Y"-+l[`$5+DkOW՟]/ ߺ0LNdsj+u"xR%kyyr ]ݻ`5-,xێ%+M|[[Y[^"7tRؼtҞlӀlip}PvhlA&q6Y"Bm:]J#dGWHWҀ7o?ޙ]}ڭ>+@+u#8ҕ`6`+Vse[ *tBttut%M֕d5tpm ]!Z+NWq;:zRLZ֦|lHk ZR`[Ǯ] ]ifT-+lCW%XWǮ%VT2ZDWXU 5theADYWIWsd]`%lk j4> Qv"RvnU4.}nJx|mn6V?f`or;z+ۡ}.V(h؎AM&OJmm4Eۨ)#7Z&{!O_ȳB %[꺭EF}9eDEg i IHsFK`R^H#c$YAMxEԴŷB7ݷi o$SRP"ڨw>Vhc QrWFR"B{?a;\#΍}8]JEEGWHWB]{pIF4M TK >?:pxvqrlU?e.ڏiq rB)7>idqt< ;=?Kp3~vhl:7#nU@fkég!__ݫ_~UZ@*/\~=q?ڄuj|=9|9KD\䒇\FirgL]rAx3%mbDjBͯOq]iB9?ꕿx9G{4[HY%zΏ@IUIԞy*Pe3|) &pw`wwr>7:2˵2f6PyaHK^Itɒޖ|5~3.mz)ox9n8)r[Laeo+5|8|*. >o[w[߮{໹vFo}XnRƥ-9dVt'')Zik>E9ܪ罥B$r9y."L]S^`)w6v[?19* =aUnqţ˥rDZpEfS//qavV-#_>obQO.^w0Z. `AK(MbzRLbBSkz?t\̠gW'LvH.RI!,p<&jr 8ߍWr8z&スHS?.^,Tɍ.K}Pڷe2M- SwНG;oLXZBi g<$Kj@Q{`W=_3):F3,ppvLi6|v屳Ӵ.x<,Lɨ326<'?Ⱦթ?nN{uԆske5:. z_g\q^R Xgx0ya^f8~jS Fj\,8fW@RonW\tt7͢W_ۤwZ1;CtMӼ'׋F 9L7G ΠtY,K.sT!tYf29f)JfP2us55ɫw \ $JޮyoXU]:|_;W9e4ƣ|Z .үOWZVTsQEۅfBj۵P4͊M?!ʚM+͜lfn}`;|,C-;SGKg{*ٓjN{gp8ж-g؍]/}'JY^AlRA>&[-8. *0SRٰKs_ַ{_$%0 34Ϧ4]5,k:\02rh3b2&P M_iۈvi$и&.vq7ެHeУ}N{SFCbu>ܪU@\zx_$ĔBh\Nz ̅7&n9.ך9X'Fa"`G7A0-$n85>&-LYŜ` 88|g\zt岜 QG Q`<~!giM4*P>Nfr[OxNۤШY+}>Xq|]d[h}!O\D|`H3ҒhyJ=% ;FxF@]ԸI ;b8be"F#S2%+&:@9Iyaxac%I P 1 x0yZa&'T{*E@Zcbaf{NևK/ Gw;0\Y=J.gtwt>2#RRp;ͥ(JR0T/=˟e4tR1Z#.f&F$qux6Y}ed&o~f3O> FKVw%;ǟG:-oݺ7dVk}7Y <. Y,.v.1?-ן͢țt6bqK?`x_0+ꁗZʪa]\`47(q|ۛ\t|>]ZPpE1CPrݿ^~| 4 R$%@ Yt׀}kZw{ AWA=! 
KvE_)NF?դ F;-x 1}ð-V) <}dZOc[;5](d:͞yU!UtK4Fy6e⻍Us1Wj:CwᎡugZHA"xZn1惏'O*gV}YMiSB9(T4jNmګͩdyyA3ЍоvٝS}̌cx1 /X ߁cٝלa2# ` u B&/!%6"ZC⃑&fm q.3z Ҏ@ľ š'Jd2I-/n!EWG`pL\[sA:o~nW[z+{Ǵl]1𢳾^0TViBFNZ*5Ϊ5TjEF<(@28Z]9^4m{j+y%=$;Ž-,9 `Y#U,]A s(3RW|l՗W}_xU< 2ʠ%Z&^{J6A+d> r*u/4핼P*)x&9{}B%L1 J&QsRQBR(aT~_ eCJ6xrN'ː;c19D7g>Fg O??ݜ?o]_eE?0Ik`L6):k *4i | ?ѶJn\vzq5Y7Y.|dl#]w*Rxway>u]61&|gzywש~sxєjhW_Ӷhfڊɵ$Eg Xɖ H+Ln5X/gDɌzO,m1wj!ߝ^يU u JH6O݋m> Y|.&t=)ivמV m~:Pٛ+9vj؍'@ Z>F)Pu2^|x=_2w.|<Av$!(Dm_yu)J-my ըk^Ju8_3@yqۦȞ>*ŋy=Z 9G3md=_Śe:75Z\}!ˮ@E[U.j'v?a9PRux6lHh)̽A1 ;GsԴv{@J%acUaȨ,9JN9$//1jF4ݢiZ]"pA` ,5hjj3NuXʖd ws/A!z3C<'=xMDU>ճY#{5!TZb0ZNEwP&CJXhs!գ?X&]]LY$P |hO5>Ҵ~HTɺH6yղˠh&j֗k]'Vɫt5u9,Y2W?%_̟>d˫ه X?_|Zk|wvմy#AzѤxd.HhgJ55mN;6˫mT{b)V=S %L>%_41>/6,:}~_L.rqsv>]`5Q0gq3;{7 "7]Zv>Us-ڳ'',z=e<,Q߿çYFv)3_Ruf'Ĝ{(B5Zm< Fl D95agD9wX \3rR0o}vHړP(TM!\$c"Dextu08+dL3`I$+YeHĨ#XT+'QTJ9HކQ$V%[R](JcuM:3=+Ho\ SY450KuD[ClL EF`Xu]ykFe9هv (ͰVrORADIi"J gBR"އE.xӶ-қ mIU3 ό/`+z̭&w~uAvTm _67g9!N , &s)u5cfNKe<HGcM{<74$P}?c&R@cQ١`YUNd g+XR}tus[+iRi|4,X5'oZXw/?MMOw`:a픸H *rv=O|`uAǺ sMN^wݿ~ݎa~Z7b'*;ݽսw#{! 
;kn~>_>sB'Jh !7m&襕A8掳7[t PAYĢY1g'&m RN䐍YJ/Jg /A %v]A'739&L`Q ^oy:Cmz sx]7KF88Wu0qX/Ym3?}H6& UY׌vnu0Ƣܽ/}32xA "j$cJ1*y$ 868 &jݨ(XPN$EYg(oJzY2JҢ.Xkg)W(@juPRpeaBP d{ΞY ]}g];$NUΚa˺~m/V~8aq|6žp͡2VX1e.*&r@C!,XlgL*Kt) W%~;ίQײ}\cS󳓝zqvGmSs3._~.*? -˛ >bӬj`wOyʾIVM97Y1-4zGTXԚ\U|>*M9em1]>BCT~tb_[~>= M[(4FK jM TDU29\C$_8\LNMf7cхy!0tqjEmYv{(*G2 @S xRA|1r$ {푌bb0 7RVz|DˀZ tL݁w <1ɡ jr%PwBR>P5U}A0AIs.+T;6pll- =$Θ8|n׼޾c=ͥ 7Y?r1_bj ?2,w֝O}t2~ڜ98x|ŧJ#tSZ#<xqTHT@ 9:- 15ȿ+ETduBXɞg4q~.ciy mSnmw^,7ŗ#w9@ .|f?RgHT1X&x [2[;O^58y%:va yypG^3P`[ bqIBR'+UI i ~g}z3_|N*>*Xb "*G"<HƠ tI Я &a (>.NOq$B.cKK0$ n/&vt-X&d~CRdQȖDJ845UO?U]]rPQ F6{L"焏 8t{_6uSyZ*k'[Ս^8= hϦ_O'려B[Wsg</.I5@6-U{ѫo_?DӦ|wD-G6j'@ƹ4J0WKҥփgRGXR$m]IZz'Z@ǥXzdJBajĹqv[iƶXظ>(^HT25'c/ᴻ ̾bЅɗ/ȈҋdJ P)."c60gBIIt^a^H`:TaS3K@j_r};( hKE2'ѵnĦ8ʹc[6Q =04ޡm^86R1*]sN9)B謤HD9A&Vu!7Cff(訳cIYD2{d*((òS]h'Lwꗕ**0 "6ӏm6FD;  i9E.9D:c$ Ʉ%iC|WC dK!-MNjL3(z6[tWPtpbmҺ o+q6!T[>:iɶpq[cB }{d?ҎUQ Q> >I FHɳ\|\Ia]VMJr4]YY-#ʑ\"ݿ̾£W$@W 93R=$k7Y֫/bX^-+kmxavvRw=?╟{]~r89;gs}_0GDiO4G|#B3ty3ZO^-zrYc͚\wBc % ';@])vR*]1>k!s{wHv1Ӽ,ro)R HtB=7}SWj(HNe/P;$@%0%$hBɯVk6R+XR"ipdQ&5.JTgmz&QA!FEVʣb_|:y O?gNŅGuk/w|cN?rZRϚmlIqge'Siz}v9.;zvXώOϪ̺p26Zl: vNITBcQ"ؔH?%BIOP̑B FQQG [evXSjwGV9-%z(3"n}nƚyzvwF-kNLݼ牨D'ԃQtޏy_Ćx}E$ Bw6==iBfC@RW@ԡ3EU03-iS;SwKݹ.qOͶیISe1'tʣI4!Gg|_]0'z=]}7rs@~>7?'_k'U}SFU=_X[TҸ>ޅB'^$ݺ(ʚTW&͍WA T.4cv;*tV & Tjj{ڭ6&. dRJ:xL6PI#BP)JmJA6.dR& )8S!8js"GJbR3q+1&N'Ϥt_6ܻ|$/FPY⛛?=vlh>nBDFUR@FX#-rÿD2ZJ +mb LOoA)zuLZ*dWiÐJWA6'Q.ua5D(Qn*_cQ'' *.z"rTsLͩ*RsJTPs֜^-X`Vʃ*C*WUJ\@B_X`pUŕPJ=Zplm8.S޵]X,T[|;n06ao/߽M'g[&{yIٱlJҽ‚d[1At(ώOfahϋHG +qz 2? <1_Fqnsp\!,o,?Ze j4x|z-v@[, ZOcIN mY^x<1ǸN>-Ǹ螆cKJ/~q wK(Uꃁ*C+YhUR^ \)ԈW Aju0pUE{(p*JW/4:-U؈*uWUZTW,%dt pr v%:ӮE4LgU-^ࣚ~ZçI`oSR[ov)̋գDg+'هj--Mk7)eb6Zu/_f/?ݷۛ ΠoGr'h#ZݾW*. giNzUDoZ۾. 
?Nϔu,|q:g ܡBש %Zΰ7g3^tJ0(!59P<\Te`(AZ;KңΆcn\޹i3&jO!){B!P)2 |2@Zwl%:*S+0Ӥ/Dž%_bحg筥E@Κv30-/KZMH@804QGHH6ƅD<&>琋f(uFF)#O ?F"S23PfT+ ~'=bWw[w Jk^b=/ܰ:Rg#bJl:Dp4Y>;?6VأĕjeZeyاӾ ٚaMƯR&t$dnjOX&dd+LKAO!Er5l՚<ѳ}1f*ƧƷyI kbϩQ5iGRPRcP$fm*QTṛy\.NNu4Ӑm[ .1ķt-Ble;Z2•~67.]['2!asy[Kw:H6}>oYBTdeeu<@$@v&gE^<4SQxu7,/"%}4 EZ10R`xҚiPʠ%cH(槭Ĺ^E} g6oVX鲈>D_NTd(+*hL.MQ(g0Dm8Cˈ.Dd#ASIc@^!䋔E;G]mci&} eUJ1UX !H* M5rh!ecmuH`4d4W=&9jObV"d fd:DiM_mұ U `Ktޮ 5w[xf=Q`Κ0[&'yYgoy<30( vGAΔ e {6cv ȟ^j!$^ E/r2ߍ2ϬѭqY>{pV)i M'KNj}>dih9mCt)zY=`Wwdf4+QMlVn'jX:G0,zõ 5M_-)lƞD@Up0NKLҒ.CedFd1h=hOOl\6!0GK'iH5@X6)vF.(~=ݜ`W"i `gD"eye6BFm9@<2$TSMj҆BX6!aR/1"ODꥦIJZH0<)cmwG;gKޱ57P: 7^coK,&4 *ѤmYj띂sڤ,_Q?+Gi^x}F]nRfo#xO :B 5r{1IVx /)*G0ȝd I z"x8 ­2D"B@DG TW@y`` !"k/%%Y,-,x`Z1r&31 G7MyiK*-,Ë;%se3I gpUq|Gf<,HHіȵJV b T9ZF(F9>2npyʥ+\ vy+9-Riy>%7HQA&5@>hf?0tPz~[H^ꂉ'n]}lll|V; =?r ˡ/h͞s J\]0P7,0IϋWͻe7ekg+VeMm'|a@ IBK6$O=ҳ£f~>7Ra`wb_b_]sSu 0~B_xr]?o&J0 h$< 8Ku~J{KMӿ3)m:77P0y%anz>Mg y+Eg3"f0z>_.o?M\侎^ ?-6}@82n@ 7+~׷׎o3*I&ޛ~F y㋤{xv,lB,gYx_m4[㋝_rƞI\=.ɉ˜@( {$RDS"ùQOg]9Ac[^]u#*fDqv7v0&ą0eONAi uRikǕk&D,5TF-7շk!΀iMC*)3tn8;@XІAVm3n n_U雟5}9S3LiAh졋q#D-8r3m`4]5Bj#pp 5!m*pt@Ygg0ԆV5477[]<w/X]ۏ ^'4+jmwqG}:6**z FGn `' tL8C1!`-QPcvXڑ/Gh_9R0Op;p26;&{cp,Iu+mT:az ](|vgcqJv9ҡ\x3NqKQs84;d!QnczM-O&ڴ>gO ɳ'/Ϧz5=xp9<`_ ߽Lqx\~}A+~m3*<Jr&LCL$t%_-%:G8&> OO6As.hɜޭЦr*moȳeyl»y|82ohof:+^ԏϲQԝ5o^Fo:u9|3U r0,~x-qs^quf-xx|y&?vzL x:tώ**=A4k] . ^r*ZrMI]XQ+*ȃ%|UZnRȰ`+&!pFD<TyD`B)汤^wgXԻJf[z4S8tTwV1*)APi.*4'TnBR9NNǞq1 bTVJ98a\e`yn,[ב =htL*"Q* J'XET&H&dpǺ8\Acqĺ #22QtyE% 0$Qam#Q$[CIX3e4pg4&ke7:Y.8w h񑐺%ō" O NYXn)i0KaWUηYk! ,Aϖ w9*Yz*C ;Dz] Mznn%% ݎoFamߚClEB0^DU5 tAטQfw&d&"դKF802e|ĥojjTljC[\Ydpv>/} I֣Y_{TBY) f"y1{d}"c4wMlqRĀq3X8 i չ§ӵ!`v #ܡLQ S`ݣ_A O/W/"v~IEDzS[a\HA)TUjIyTÀwDAqkrTa% = soTa0Hl3~`2'<OQHQ͍QmQ'($9:8_FaZ&|"yO;Ua2'IA.j2ӟ:S} NG.uI%RUC#8Ge? 
[gzip-compressed binary data: var/home/core/zuul-output/logs/kubelet.log.gz — contents not recoverable as text]
\eTSpڊc]1-b.pmBU!J BY_hU2'LRY"teHny]1El)9+QxVğ86u0O""adO9-JuuIRŰ2J^?b,{:*:]C]U/Fioy.K 777)UQ,Ӭ[C+wܐA9a@p>9e.gd:{OG6ކ#n6&!yε2 r]&,O3JƝ.YjZ eHi3|cS l6-ze/6f:DXڒ |1ŘŽn <~xѽudU =6oETz&@K%Aފp{y|ي0&#˥H:GCf*HRTJi+ rݍ_5mYj׌@W]߇Ze!ѷޚPfpĻa{xNY 4A( ѩV]ڣ;C5)oʄۼY+&zmpQv|922LQc!9cIF'9X*dL9cbJ iB+_0-Um챎CUYV̗()n~Dn&S˟{3q` W*~ٺ``ΩP2Z oM$𒭭e w>z)>ʪl؀,SMSی8c Cƽmg-6ԥj;sVGx#R/awo!ZpM8(UoCXN9T/`N3) he؞nbU_~F3 EKOI~nD:Ʋp憠$Zj/sFSP|c6-̚r!&G9.ɓݮQa1p6ǀL P%+Iv=^ŶRx+3ŏZ(E5FCO$l#S[-ny޾CߝSI'V*439pʵ\نSنרl ûu@ u=9FE H08ǐ.A'coj mL uD$ٻ9n$W~`j FPfX`a@vpwK=Tuu3yIm`YEFjED YB{]i6!3JH@t: FIѭ i4;q+8!GR.b6iOB%0`'S65mﱬwEܲ ֥={&M" sg˰bέb,w+\eRӪ5ѺջϞEHeXs2,kr(g#(͵ƔEFb~+z)S(1>ddwes[ySbI3c|tJ~=K0eFQGb$8D0ܑvBF #du-W:Bqف@N(&bY!5,7R7WkMZ0M 2P [Ǵ:Aho5EʁPL2'th, B0Y6o=j+E 5_>ɆIǫQ;Dom otrDpu_Lmo}a=GFuV Y68RB\})wFDKʒWeMTQm0h_ Ѭ5޿ɻ<ߎ2źx[U?Wͷܼwkۋw*'pҿݻ>[_~J163=?߅G)[LbtEls d_uc}c=m ]KY7*BȻib;ѻbPb:ctn+Y8wԻ a!ODklJN,5=HE[T(]jIMDɇWwN"ïx-K152t:7wW1_cf:J` %0L3k񽯐#pRa1)#TM^KyrМYQI&guD$Hv9ghS9xh/gUkSȽulWc|Fܥ&1кN`Հ߯[Y՛teS!iW߾Ybv V ` Pֽ:41*iϥ)bQnnM"ZJz([4W3(e!PvW֌uB3Y=ځU?bnLKv@,.n7B- ˬid1UĪL2aγDtCJ>,(G.*Zf S`i҄0 #/򗥑5ެŸ/PEX? g`\d]=Ɠ >$ a ÙL8M-(z A Vlu2񘊡[$7Rџg.v+,){\D7cLMˉ^ N%B |=)`>3Ƃ!h];:?s7[-G/[Y#$7;7OOKUOő;S8N4u; bj>Ͼ稀p{8RA":+y9n(? wǎUgHܳ$k,"gY- jiX `Beׂ Å&$id`=auR4qװ}F{ֆ}QCs56iR`"YK!ZG @W1kI3#1_k`E!˰!9=( e8 i5:=kn. |q8khHX"ŠT;F^At%D:%S8M"rg R.r>%banrD:RRT'A(,lRc5j(k(@}ɤPQiZb_Kj,{]̂k~m>3ƒ!W|QͯE0h~]PNȧ ײ{۱Vtͯ]EZ_{ӓk4j~-ר N|IW}fA5Gaw>Rfax|#ݣ} 8|CGVA0cf>{aQE,rmcNGI([&WNkf[u/56+ޓ, xI"a&:@b>76MI>N%C}| gڕ)\Ocx{)15MӬɰ骞֣+߿.3c|.͕F4Wqi~󯯒Y9ix}_sw a#ZqAr*U>݄2\wfMlSul>E-l}ֳIIfۉ:z=1vCϠ-Ԫu;z7E(bZ<=)h9wu9Ž&Wl# S1% %䫭!Mߓ^5)u~~* '=' *d[okCgX2%m9V$6[ D=*O2*J<%CMeܦ3em:Stho,dj\ {lM9izkgU hg@5Ύ\ퟏon\1j\$Eg !f {mL?4⨐h>yrqoTI M-78o2_cW!yc4̈`8`,Z: wQ( WjhdEWaU3UeѰ'YmugŬA31aiHaҒ~Mixp=H3K[g>L!zWwZY+vXf\ڱ.p̀.@%$LnN g{/ /"Q.8D =1g;^?Rz?;,ו[twcݎעF3Ut՛fo8zw?eHKo֬3pB%PI/AUҺ뷟J&JZ$ys%\U*ifNX%غCitox`>^*X*jv|~ҢkP)5BPKl EP2'C0B? 
3z~HF†Ӝl؋]UaV^U&Z^L2ZSR4yF;O;Y0I$zWZEc[|| ob̝#Wl8iq;O)Pe P i!Dv8[%mWHz Ұv\n(5x]gM|~>_u wmMnr*'K Ԍٗ@hq@j$JFo*L\D&{*a7'8+LL>97􃚕u0Vø"H؞M2qeOkW$͈MCPV)]PO<'Ǽݡ\qCʼ% F$#&#ꍍSO:f1o>;ٖy^b4c7uhfcT<9 Fr/a>WZ Chna|do2=7~V=_BN%Cj B% [0ŜN#cT (,2EMti%mZ&!VVDF"22VPRv`Yd\,:(ߨԶn$eHTȬ%+QZ%r ^~qU9@A+ut{ZBo9W::?aBwX쯤oNcXL1RC2vpn-~KP#l7[i=T)zgev޳/Zmލ1ٛ$ X#a[z>Q=) gukG\H{0/qكH AWa=@a";UJJ/ H{"2-ꁨաМagW'R^Yb*Hw%js{lsӷI2kS; |Vi]>?o_9z(6_h5wEcsbo}Ǭ]? 2BFS%PbJ0EL)&N{J1fzj'܂?o7@;#AՁnӿZ +g0u`ќ23X>̠G˧`+^Z|j~3jöpqF+K@F4% 2+v,a(JZ0W@ 3gj$?}%4ٷARXf\܏J$/ٿ`ٕ"{"SվU$5ILb#ʼywyLv٣Oĝ_}y=b5vHM,DcAeXMR*PaYl[PQ>JܧLmKC=8NQ8u>O9 `ıv=RmHcpZnSˆh΅ #."i:iAJu1G?aUsfix(YC*aF9pgnV/!Tw w XHIhUA~'IwK??(ص#-`~-Ub㸾MX)un͕ PRHp`PD"a'DQEq&4mr'HETpbA l 2 ojqm'=~M'4E[0d(TDҐP C qLV1sGL%00--T'B8vĊ:y`",ecH0@~ج4}tgP$NA*sӏ>LO% }"!xŗNV ,Ѥex=T1 weSU$Xu`6~iYŒW(E޸FɆysA li&昼64ϋ1rkm(@O!jcuIO  v>ɨ$?Xc[^K~fWck ا/vWtNfG^zUCx宬3 T(+Zp;khm(-arLzG x7wY]~Nۥysم f 5^B:wE5oB2fauǐ יeB#M`l|u:y]B:G7uDώBuO587\Q6N{/9V̮u/۹'YЅ<f Va/Z\xyߟQ`fx<E߹ 3ƒt~K;o7ҍX:ZC9ΤT:3AR=hnY, ܬs\v 1TZVeY ^3-Z`e6ԘH3l%ЈFʈhcZˠ[TAEff#<-34Of3Uʬifn$! ѩ~ҋ6in#%$ă/ީA5~Y+.(N.Ǭ a5-.,dV뢹erK&+IHN6ºN 5?$4VHi[5a]&YQn&e[i-К Uk (hHa rRGs<@rϛY#[x/w[.s.!ӛJΧgqFǖ.a 'OWcXF_qzҨ/l|Put枠[p=5#hNЎ"Nv nf} 7}ɓp[뇲KS~Tz즽G@VIB"ZA$Y)& 7f0Q$0NUCV5G795?;*p-*BBC#`(!WH``1^jL="%.q~+cJ(aR2?s6Vӽ:@V9WV_O6 āW_:@ 0眿)^Qk%.tX}>OrNJ) n93ؽL(m[z3 aCL^vQ"ۯ'ӚHcPKaTu:/1(^i?D =PC8$F-`?ÏeAʶ֏TݫlevM@bo.8Yғ%iz,g|TdD]X\HI Q jFjJB4 ֚5c&fu$MVnD^MmX ;E`! eNH66`}SmN۟mTT#62X. Ci2v_ ~T 2y (q$X10aN1aDF)FccJd+1T!rJ a['#cy7 _5`ݖnP8 *$5 -\Lu߳#1ū٣ '\*iu_m7lrJWROo_za2S"9 ޠ7/ W,;~=7B  OZaޭcoa MduL q5s斠åD!H02~z[/|y&DB&I'1dHoR=uÏBJb#M:XE!=ֽl<9xg**nQBPM/Z&!VVDF"22VP8aqP\ƀ8B\iccR #mbeVq4a$&CO‘.PpAJ+Rc/BQiLa+ȕCU]j p#c<mf!aJ8SrNpr c9"i8,sf-"a|!H+ stHM nA I+!R[d9] +w?o ]q%ohy? 
$!;o![V`0~UdLonk1(Oʮ;rU2{˕sL i%_A˓2KUi䘝H݂T果0y>uv==s">%YL_MNz`}+(GxaygdX^,oPș򳎦s;-t$8]Z 3%Vs]\JPkf7*-n qf9@Is!\L)tQdFajmTAÃ?5~䟳#pxc8Ee달9[0ʖ_ O&cgƫI '|7?]Y -v:Y>sU,^FEK5FGJnt܋$<M@dc ^ pKtL:h<ƧS'/?Y mFeΏb4&[W37MO)؊>j.l}=L9B!#zs[3!J͈b]sFWPpԊԼݥl'M첓KJ5,S.P$ ).$ 0ݿ"Feϝ(\ttVfuFU}*]}tx0N@՚TBm)<xTAN~Iw699D|`2N(Kv/7GMh9 Boڌ. & !\E|V/^To \r:\ɏz5/ѿ\=z{7?%Moá}f~Lqxg%A P̗Z*ͭ*B7L&[?M`Hn`MU"?qmi SJyG5t68҆k\4ϭ&iIhP@aW` 6.qG.F$aQs [fѹ# OQ6ek3ÈVgUD E㆏1nkK0Xplsqsg>U&1JH,q錦2tW 1Tjƙ7=X" >x3E݇  #/IL?\_\}2A??*PO/'#!dQEPx!PG:]ժ"CS) [|Q2a( Ğ1C.1f[j;T,*~|UNrOvHQb%DX+#N&)6.GK9]%2굆P%Qx1u<(Gfx3~=i ڻh!? f>Mi%x>_We-zK \_]JUڻpp(nӷ Cvq<*KӎzYI%zvw30S?occ? c:˜`~ufjRm&54b _/?]is`s>X V*+ѯT939_2=NMT%B:'qT=tJwRn=^kv~H~a+1m<'|shϫ 40. ;rz"W"krsU%PňtFɞ{A`PT#90٪Uj>\5(9C$F6F4b(qټ}i~j=gk=qcXВ8[bc"෤XIǴRTFsGb|:#u\ZXE#4$U%wϧ'RQOҠ^5YjG<w5OUCS~p' (@"ΓӤ#‡|vI1NZNY~":5pDƨ~Ro=̎uehYv] aܻ<@?\Z:BswG BRIAN<A0َq"cY$K0(,EDZ&Bk,)lZ%$.t&&g]*=LӓЭN^.m1AơoOu όz)X.%=<uVf9O\rk> ǟ#L;wO5Td=~S_XiV$Lay`X3Q`?ޚcWC㟼ˆ~_B'ޛy]I/'2}yW/.{Hormw!/kOzm&Sл˟Y--A|NOlBFt'BG|0NpHቔ&IE$Z"f%J%1c ' 0{AZ6Olz9/dey>?53F69\tmm61 9 T.i;!A_?L-y\Lmϣ\fF{P/qOz ʗߧ \CM82ng@uƝËBD.j[JfJ|Y&mvBV5p͊ڭɕ0nZJUcJp|'}UsCGlzQ5VS6"[N=)ru?-CT!N)کpJӝSVNE,do]F bc?ѡU;Ln:`!xG`&% AuGڨAv'3]zQ2FŸQ)j9&+W T2 zHt g`J!;d ԰l Q%e{GeԱ[+'1; ׇ]6Yŕsi 1m\46>Ph{еZ\ʲi@R(kHP0Ѥ1-M+v$IX<͘ iIB3 ujbjlVrYuM ;_ ڦ)E1JI)]h'cN4k͔Fֽߺ筶yO*8`Oyk.wa:`H`XnXk cT)1D e(ףuTm@CT=oP&ؖxz iZ3b=`.|nK*1P/QW!G/ތ_nO֭_3G3c9O U~׍^خ=v`?>AeǾ fpe5ds_x/$T3{^GO"l_ ~@^lsH)+&GD%W\Ϛ>ՈbԷ>[W$7׍2@ 88!z1G׃ $Pf4\aކ \s9CWD:+ww|3Kث7A*FHXPrԳcl(L߻(zW\vW~lV+ Iʪ$<4)6KÓB Q?~0˭@f&e'` '|>iCPUNXY~AI"ao,轭n 0(sIF ]9U1WhGQswzK9dZ(K.5LJ>j%qY-(6jA`pDc4Z{CF9oDZbbђ:IlkJEB\5E,'|4=﬙Gx6 "Q}} qjv|g9]G??a h6pYtq.+ ֞^dz>7>c 7w#?IJw& \3I^K\띺Jӗu";wF%?{6_w)~ȇ$f0&i++Y_5IIDYl5/24YU]]O]lC~p)q($GOAҠ ~E]~kZ儖nވn]p΢>L3&KħȃxOh2d:(,Lk<^BK~d'- |_g 3-(^&Bũ&i ˄x2 Cq, Ŷk3Wa8\sBR CJmwIҌ[3(SILkO_}~`ZŚX lʨ SA,k:'cn~cp%Wz8zc?%c(5KȤVl3jTL#Eii$IJXJEXTD`**Tþ) =oAcG6IX*a`uD"HO[mXe#(HiD$dT RqbyǍ]5 qc#SaED#`(p3Dd HiA6$u^+qKUzdGwtAM0j2OoN8K[s˘PZxB􄈤`-@ ե)˩A:^ H_[@@c#%>hۓ$RIuqsb^2&aJ 
%\DT4$lE3;s}ΞV_*zU=:gpBJsbI\Ȓ`-EBї`Ceh_Zvo9,&j5J &%0cc"aShbLd &͒H00wL*mhLdF}MaWiKRoP|DD>o9JddRk$;qK@$)LBǣxoUv=Gv"ByOgI,DJr:EXDLbHĩ&2RӄacIc,'8#1$&QŒiD2tIFS#2ׂ!!0DKp1߯ljvP3IT55~J&L.K? V5=22$*rpt3YEPkIqD@0iǀCɈ PO s^>M'c{"3s6ݤVpSHYu<3vNS_vH2cwa2ŅǏ'.\Ёu}RR+XB1HDI;˫`- -G4(lu7,ϳBE/g`Y ֪K(0iMZjhm%le#ߵL{1Ƽv%CȽ8.@!Pb&OҰnaX.U"$a &_˻5J! XLoƳLcFͰ<0:ޱvN=nj=DO7<`ʬ)p-%/~"{Q/y64*i?w#&ײq,e,*2$c(g _2EHIՏsZB`;+BNnO135|y>rpD~߳lݦ?X<םhYa{^M%&>GyۦUHbx35t"Ƈr:T4Aݳ#ʴK>DN7H-{7p'e9yHNM\,e\|?O)%% e`bbͽ<)q`TkJCrp:)A\2Az_2?gُ'BrؽLcC+ԙ&WL΃#B5V9mD0CƠ =K2mlߩe`0<@~}+UGBAVv$۠v `y!uf~^cE9D2LB&L1vua`f w79}Xk8::fzjo~}wWr+lxN #=-zؖP=7Çk˟3pZFvw2c/4L+ ):2ׯy&yyvUx϶mU:TIv 1nv:(X$oCy5ͼ"o~Y۹D~n;L̂wM b1tݡSh1Qpu=T KJܰ٭9oȺL\S$m]U>Wnt>B仐Aþgd8o}Y8#]^@lS$$@YF点%M3ihu{Zvc$D]Z/*r>I)l`1U%rv?MDyW-@ˏsթGj usiPuwNP T[Zؓ4k#$Uʐ[Gj1:ڃCB?zy)OOݻik9u2B^-Ph-pߕjGIȓ[76?mx>)U-eҵ`(ȹ""e٫ /y {YѾaXr)-ж% uH839FBn0ǖn! @%]+z=!0)TWڪWZkBpm6<Î *y4k3j! `4PFQPub^_6EW+tD|d-1^$jNEn.駈y=0"9M5exlk@;F&Y N4#[np1ɓ  (ViP=&۽m#N\8 BzZTqXpb4ޕ@$0 8pT"!{5UAXQ慁}O8U$ yC,"k/0wU|h-X>^[ςhXK )r|8#~&4v.֛"=05o6 ƘsJCN.ҽ4ٿT1}@BJzn{R:'vmS{ X PKqi݉pJ䨯 *xos9][\l>r)@ ;=:ԜLeF(Dy1ޕ(XXpdNZav˹szʇ^u ~nP/7!HU8^"GdkDTI"sjBL>075۷<9t"0 27&ǰr90E$CaZs.NR9;`G);M=.˕ "tׯtFLa R;4)^9ǐ'qeZ*7D|S O7Ȓo$>rj)`GȻ't;,.y3MB#81{H* l;%(a+NyM0`%׺#D+%vs5}Bn^"=~X$ar~ʃ\nBS+z 0b@JzaϚP;%D&9YFp 􄲨ͨݟz@4X(Wu:^Α*sev!3j=%vX/3W]VG;)JJږl_vBN,%9Jvo.¤w۔H9%8\SpM>o*U <F1LH푫 w$orFiv"{rq!w],ތɔO+*9S$P{◄b,_rǛ\h\+!T| @JNVOslD9N=fNjd~ꧫ_+Sx|"ϋ}$j;%SKYP8eRR 16CM2PÊoB1- S*$TH(FhWHE,F[̜T,(" Y7K(Jm&{ I\tZ0\|H=VCq14c7K\l" 3=c 2 KF9sL <0{ѫԅ֒k>wmb<;bYPm4rPVn<p ,1Ͼ!j&['ХThqQ `ljsD"]x$!TD ͞wfNA PS)  YA)UCʡy\hHj8FK4bVo\]1NYYĘ)#VH(bgg3JE! 
S0%.NmW^ %x# e[I 0"B1E{aa dT-:!bU'0o'/ssgȀO6@b HL8%m (ˠXxXY6\y wUA+{-BD!iC ar*'&(g{4Y v{2qrBJ?,`fƮ|Z5ԁ5A;f`!Ujcq0Hyis#@UxV,G{-ҥHUdl; x&Oއtoc?^Cb6KS^plݟ^G}@سc4fwn: ,.QR#t ]hgPRkng1]Pe %f7}3+q.Nu|jܬ{PknV%9[d8_>x~}j> nmBgս|:6(OLdјnuڏcEBsXbAL}3JR.sI̠pnYiLaMiqu?s"*u az}hkBV ~9X]co]|0 "dF.͡T֗.9u%DJzhI_ʓ\4*U)쌓Ҏ0+8$9S\2&;gvG9#VU?\])xS 2Ǽ$rAMc !a;n4A1߮9Q'Y MCC};͇!X87>7*ƱՌglffߵ+h4 bN[ԇd1߮8D1ok&of"J5cZ4; |g-c^l?|S-qFE{LjAIdp~w!2!- arq+ A2 x-iۥJ緖{69970r{x,IrF&7ku~`+|N_͚Ŗ#=ROתhԴ1IT+{T4"'V;+RVu qC*οnc/ Z˧;_*'bR޽Zs̰fd;*)ȸǓRKD[z/ET6\C1 3J/Og2+O^FٗGs,B<'-*œ0&'ֱiEV,N! #b`X8 nў& jݪvzL&pءFt"+02' bƴOfg0jN;v)`JɈߕx{+)An^tzwtZOF]B/ͿDW.\t?#a D+QՓz?[ ]$Si1o(|xʆB3(hmtf)(1׌gsjDUꄨhU N M/)BGi RF0;L':CQ1F#êεkLLƩ9qbY'˶lB0{fS?ӧ~8*j ef`?]ݳJru_bA'hX` nVo=h>B˳gqٯyxF.G]4W8k{}$F +\8*F=bm .B#YI,HansvmFBV6Sl-W[b+#7%Of 8x\zV^bÂ`P XO 7H3Q_Ǎ T,ŨmYn#̦xʼn^IoiA%RQ9!Kp;F`29n "Jv'D4J!_C3 U_$6CF`+KDĂ4+!ؒðQBbK('Ÿf&p-Y:<YSB&fLԚktR0j!vW(tahUAQLeR_[0"Eq պR jwH,!\$`XYdVRUkՆ& -EђTd551Jp b(J`  hRrX3)q%;Sa!OKf]i 3&-P/T'fDmwN &tr;ޏPxdA;OϦw6g2 ϓ?,l'7n0̃u{:YϪhv!Y|:ƒгr7G`g$(NaYS:YXg`tO0FNG Iϝ.ѳR~^Ql v!ѣ[rB,jAh%;4"cWRT1 +Ȳ's:eDQjۉ@vkjm(Q;$ RL|$5r2)Da ^jy)!T(ЩBx؟oю_,[&E7VtȟNn@s bY6LNS5 yfMi4O/QUz_Ulޜt0ѼGOK-s1$K#E}p1ҥ( {[XfT%ȸ]T"ebD#ߚU QGKz!x1\K߬m\YuպqYTĤR8&yM)ge1ӫ"m C4S auSB)heO=edNn&=!{4Dw[-U1Ivt%3k@hhC\=ݜjN koBhі g4!P>12K%8WhK@DX6" %WR(#a0T#+uDKIɀO@b#d` _ bK0Xw/pHk!O)8d׺ ,zMAT+q2;YgfS̶'qGW;TsI`Re1n'\Eh$PwdFc[yMmX#|1Ki奄ڡZ^D2|"QkљMNf4&'L5H>nܜjIqE5\cm ,%.|Y?pN%-$E \{YbyIZm4;jM@Hq#)^ZR&1)gBQ%ͩux"jjLQ *:1t>uw76HB8D0մZCOkT?ERe1nN̺%3k@hd#Fm,2NGY"Q;huk!O)Mo!HH9Ue Xqǥi/ VA*'9p"B%H&PM9o@B{k#J_K6$eQ<7VB8 * !)^l!ce_ bKz)Z$HB8D0%t=ɮuS2NGYEkd2֭uWw[<̿=6 5iaStܻIGtIL=OM5%GW6W{a~.JK({.>,?W.CG~B'yss8AGw~}݆tX1?G#4)-ʟ!gO;#;-?c>Op28){7߁6j$W igc1l\/'FЧhAkSy@=W9ÛK^Nng%Y {oQգx#7J*~4Z1~WNˮw0Ā h2u^s|AJ.OAwԝs5,iS1gיħ:kd@M2WFrOÇG'%[wLbu*rQRl@CU(3g ?F9_us˳ea6^-t{yoz4~g=;7"ajC4l?H*UɁI .!ihINg/};vTb<f$B+lR1NV4!&tK\LH$?ҩ SJ*LN!SHz1+:ΆOcAC"̋1;FqGTɏew͋jÙ]nwR8ӣhtk`im Hqҥ&Q_] #$8ɗ**L c߲3#}K x. 
i}~zʴ&nsnp3 ql?U;y2_wN\9_|~Q8/qvߤxѳxz8RȹJJ$PhEBᓄ]S6,V֕uG<@5JcaiկWR>ůԖ{$=\}k2[FƓk=u-2!p|+-h7 d*ZfɽCFi[؟Al侰9u1ɲ}Z=nL[dDdk f?sT1pSɕN{W„qB+X˵+(Nh(8gArEM9av1sbֽHA-p6iWNj;ʺD[rzO4'owx0x9Aoq{RTcGsQӊ9 !-T~^4WI300+ZT>TPU^  Q"uJH\eL`:✗/ 7(QA/K2/˼.R.~2-ub0h,10i~,(#y{H% K&# Lͧ3EJW֜{rµFgEFNJumQnvWk C뛯H)t @wdˇT",Use4JgDW{Y mh˓&K QE^uPmX3h[:hzZ/}VP՞yVszBN11v#0"_6G"(dXXdYbV7A1xyǍDyPPj&{bbF*SIŻ]57[Upm55 nccg퓷֬&sDz[ihA}lüϞ}?s-(@-AfaQDH҅QP-."+Zm l\տq}+:S- S-x `qe?ІE5qQ}h}' urTJPZtxz=m/&0/??ijb~RЂ ()Ek;i1D1#UpJ\~/BOFa6]?ؿNJZaEչI]٪@#/ڣ}r4g`e6pB,/ IHu} +{;KP߸9?6i%ä~)qoRR92sB5*h@(vCi.7"ǧ+̱+ ǧVNx K&HAD4D9NM ŔH]Jmt*JM IDbX0"1].\qrxQWN6botiknok%F5| %烔H5A:`NFXIdBHD'B2zOla+RTu;32-_8 ׎(}ZqQ*rsj)ȅ@F%f-%9#+<:VDh-J&wZssUj6>PDpo#}'G8n}jX K5c4e'u 9Fm b2 %p <<5hmTiEXpdaﮃjFJ20"M 1^Jp^S')M%} 8!t9L0 gwRkF@nrV(qqʃC0Q GʼnP%Uw$E AUVӧ2Tqd5+[PXa6TB4my,"p;gwwNP*m%#eq~w2EPѕUB\\ s VɗO\uNYQ/~?18q9-Gx?'? o"NY(M~}d:[|46}4ݻ9Q:$;Gv:BA}e8>p!4 .8|<ơ2YS{pL2(ҿ}~=悵hj`˝dr`r<$w0`LΟm xgc+Tk(@&'0 %d!oMcjø*y 5߀7XBMs.}+I D$R$H&:YθWj.{X7pV FA4CSƈDI9B THP&И#YHPJtvܢTU +OM^x A$+5ᰮ[P -LQ[} 4v6ŹESXSWbem#o$>PƬ@YS$.^ j;_XPmJΕ@֡{]9)qmic^blpkG'Bd FJjCIMa65nRXAԛտ('=s۠F#RW+pP4ԼWքu7IeMwL3YiLH j]S42@jt&\Dix{-ɏYk\lNcr\0,T~k,ʞo O4?{b~y>v4IO4gр- G*>y5NX>4i4,YM^1vH ʠ}8BH[_Op]5X\%<2UlzC$<؀P~E=(yx ~jԘ.fp곮|*58OoGӹ3[նX%&^rxs漑^*/?|L?#}5&_JgI켟g:D泻?3fg~lAu )q,"rho8CoG n+tߘɈdثX,BʐIBx}Ys y612?3W.oBÃ+$KXE ޒSVJap2׃ή887[e`j%Aˈ [pIWI`Fx/7&hFuЂ l2Ί9uYlč Q$Z=b?ZXޱ;c̼wFqp8STXqǫju7r\{1l[6*-‡i r=(zko8ƷGrn>כC;|_[ Ntwr'PrhNϯu/ W3Ms|6( TJ}:'#*亅/&*'qNKcCt2 'pڟ2;yD{q '-(}v TĘ#U?ƃWF]mÉ?ͽ#uroY>W ~"MD) J`zmtDV?7O (g<2f CɄ@8xHV4lKRPbS{n*'bf~8 +gn~ *7o~m]&Jޓ+(,H[Yfgh^fwgQP3[׻,zahG=l?sz|6a28dȲAJjVPCMN%򅆀_x J*Ƒw$Xvv:Qz6܀SZg| tQy0JgzzfI4U: %ٓҡ5x|;sMs(\d{WhTfܵZ]P)&J*Up!rzArpݚYRO-ƅ[#xdٻmlW|E@?..:ԋȌQ l'ȎdS(4qh|s 7\23IZ TX$?8P?G#'% qZ Cu;¿g;[ s(0ś( 4\9l$FM6LQH81%4[W`"p 3aam,| ލ{ъm/oZHrOGw`" r+?NXTV:K}" BEBQ:kEO ¹*cy2pcB9@>1HK{3yǐ!'׍ahL˩kpWky|%Ρ,z0z )km^?quqx;mjPx?ojZ00ce1ԤsJRbd:nvPqp+pjX8ŋ da2c'ޞ ) h?=6dKSE/$v a@ (Vk,5`{!a @FDW尺uTVwF}1Z߀FM2\[Wj FlwEVyErqluu\@7[i 
r.ŐBHRSiV[0%Rz}[dCx8\'Ehz:K Ѱz @{.GN PP_T 70$|H#PtHk*|Ͽ88Ӊm:~Srf^͌pqxϧM oL4Ojq=fכW_{,$zF:Ab2cxpx3eIqAiS1m2bLLI ź$!'| dR>'|Bȇi8$'>'OS0d߲'zr4BLK9;Pițo(A-zQ\kE(;2kENYJA4<R ! LC"AMek :\#;?HP d #$s4PmI%Th*sf y!}X65[:VnOvֱ&EpC|X𖝳݋uޓ匁wY\NowNr2 ^hXv@&ݦDԩrԖmMً3-gn?<ĿEߟէu]]g>kR=Xcg vtsL9C'7*%z~^7=䐸ej,2>A7N5LR&#}FBY.2Ol)Xrb3fej:\iw*4(u; E PrnQ;wǢDXc&Uxڮ`6Jj ҶW̮c!Xݨ_ Kqg )Qr$8*yhj0d"kLq#, Aa9r gB22\C9(ʸ6VC#v]ǎ1+iFqcDd[̴HFM<3*SH 1j6㶞6.~1vaD{ 7~^woon@Powu*!`6Q`&)(TVٓm3)) $u )RoFEc(it!SO6J[uW:5-Q[ǀp-ai)0;0Af&[4 "(fh8EnTr22Ԧ߲mĚbEtV%]!Ke݉pxbDwV& \bԸ6nV25$1aGpܢuVw|E#90Kn-?v HzNW&y#X|ʴm'4\9KA Ou6DZR7G3$W@{YEq~]/^'|!oWgNME9Pf'0= 3!f/ m0f-A \I37P O9ZSa bαυOUn8L?J&[g@nr+)S5^ fHqCilL;Z,̀\O.x tސy6঩SHZ*3(ҁsPFL)d4 EԩD 4m"OҾ\_@sK ɅP1neVR%J4 Ȩ8[]򷅻ߝY-zYrPRo 3)S=J9e)%24ж ?_fyVٖf83='3(27ˉZICr%˝Lk'!g^8[Mr&s;masĨ12D3IX7-0mnnD*˭?#SVZ 1H2hbH_E(>7D++5L0 ΌTR`gQ:v/quUMA͑ŃZ,=vE:o^|k0{e`,Y lٺ#Di@(h(+@mDG# }AoA鄱k[ Q v B5ha)1pZ+ 6s)_NAy*8 ~Ѻ0?K[1gMɗي3c&ֽnB lX r.5|e F vא-OPF[7 ҂晴 R(GN cY+]ĸnFm-Ĕi y4vE!Vt(9gNcJ23+jǒY#(*ys k 9AT真h{U;_EQX b&M/LTf( ɿ_o mGz&o"yY>[-?[72 \[7ocyks0i{v JHBF|WBrķRv{+X{Ք1*G4-@,Yq>Y~]ђσ2';O~f)x>f 5f |22et6{48<~hK>&iv "zù[tws˟!<<w rT*ߕDOo)уS pBsϟn?s w¬s*5I~a/eEi׳m.jn.j5CW\jFCiu5-o)k-UNÐ˼Vv[u.SGJ67amܨ]hidثYa<4_\F;Ћ`;Uo؈İo </2جt 3"ɽ/4"SiSᎫ5evnTuti}"=mYP^;h-i!7'm!K!N3Bo8P9Zy@?n琞c2` d ;kxh"As]^2&wG0rHV/YэC KSEDtW7֝Yݍ 7arΝ}L!}u~G!/T?<ܩ- yQ?rt/';oڨ6ż[Nܙu2]{t'Pv~gn]~ia%>w Kۢf~E*}A(>TBN襁n3bVq@' ޜrZdt3btSߊTpB=b[Xv5cXʉCBUi x<<ⶀ>o Nx'k8 jW-Z<{Fn{|ZSR s]Atp>ŪaJϐZq=76 4kX)loF ]# C(%`oJBs[I |cU ;;QpL 9eoi@Apܭ^U>GF(PkE "`sֱĴ&\=K&K]'SBmC%DΒ tr vj %/\t9ݩspе&&T*tYHF kyv>%#t=CmOȯ.6דQ1^\j\|~ϵ洂 No{#Mj BCvڜjސl׿1Ր*6:nN5E6~Oi kxdy䌂Q uΥ%^1cQ0!kȮEu@"eՀq!CTSM )8eB\NQ 3u$@HV3"GX:!\YAЪ{=\@9Ը] +* oJE VHXC `!B4`jK rB Жh/ 1) [Hp#`lT'Qw!p te1,i!BTh hvҋ,>GOŷiaཛ^_b2]׊ط2x;Ƌf.>cIybeyu 3cjI>ˁS2e˯ɏ{`t/)O͝l\N!g1%zPJgaQS| v!):_zA{n2hgWoNy[mhSB9D0hkU&kѻʠ trwIim¶[~wk!9RDt~Lw*re:Xǻo5ضw%6nm C4 Scލc+&}:mS;o@3h8|oջI;UȔA!gU[tjUeqW۵_zCUK =ܪ%guǬ5Cy1j dl1=-0I{1cv8u1?3`J|wZ*4s]ӡ<[׿3Ɣ=ܮ%P)zcƘ+5s 6=)w=f.=gcaһ3&d7so{̸!b ֬nNO//ü "b::^/~ 
oHʔX2&\i1U6H̐ht/чpa;<{soC([^(^w_u a@n[I=J^" dKk-!5HyؘdA"D7O 5(&8>.mZ4؀w+(~ ~DPQBayq~NއIB,^49㢜&ja~oӻrT-o>g٪i"oӝ;;I\I#?^$}"䌐!KAP_D$=$QQ=3$+=]!i(Br0ҵ%R=!GSf$,f8eQ&!6A%BLWxE}EZb-vH zOQTAE z#2E(UEBhm ۨbpJO"OD3)b"z+B,PC4sDx=l9,iKp@-!f"m[B\%)l0X2D:f0ǂ[Dk$M1§HlDZRDA"$UGͥ i\pi+ac-*j1VD4 _,T"FORԀMA4F9A1]B'0Ո,PHg"xŢ3Fj\Io~Ϻ9{WeXP`mPOZ]i ʕb|QUDiZ韷.6,+Ӄ⧰ڷ0$|(9ڜT0GY P~/^w%*=>dُjy?,cctXzs\́-絙k:YX43S.\<+}u[үl:]K],|>NOJ(cBs]1[,lYQTB//wA(ɣH(W2 eт"ːfip3Rb&PZQQU}1xeR[R^wFe#ăI㹥 2LZ*6Kʹ*jmB6Y%rԿ\9XXF3KB(mD"D( gq&\MMj[>&5*耿Xo)6Y," k|Uڶw*;f.A7W/լb eJ (s[ x15L2/Y,ݻ ңXbok 1{͘hm{JO"osFFHl ׿xO&l3ϿɏYmã jh[7T([@I'èTLkiGȷ4Ztb`p+ں.jhԣ! %P+#QUf 'X$6㽘U䗩}d>)\bCh]No&&ۍ<htN>"&R腔L¹L.Fn!K,hc ^RzYLūO{CҌ<IqvMƇgy`/VIf)OWhFI2)ۇxAuhǡqi&,nRJUʺrP-aJԍ9I.'._]-ޮ bP^c~D_p']_N|XmjDoGm݆XMd_kg-~XO˧G I (ALjXjKP:FOіaƍ#ig/yD`i!0ٌT,I2gU{~G%s /a˹V)Qvq|9eMa>'=NE^mTEl}|BNo?>nPeɯMWG#()Eś__AN37. HF)._7fv<ϛfz#.('D#UHN]\C4 QI';=o-5hV¤jupPs:rͯ'{94u-v=ܮe!/EE(z-n\V]'r_xxw=[!CXm iDհ´]X~G﷒­jd[a `D$VE;Ik#A)F֋~9ja:ET;o) g՝.%.uE=}\<v5Z 7ܲmyDV'/e);mMѥ w/I1ZQ~mOTn ,ҎD_7HR z25h0Y%, Rh)]@>ӽ4M92E $]us[ِ1 ~?Fec^f`FMM;B~մ#~rY.7ȱ ?v5ʬ@rH^,3&)9 ˵WnM)A_ S((qj,‚:9wCAI^nVJօX`t)g5<Γ޹??FecszJsދ-FMfHTp-.5c%j[~|;>'Iix{tvokjM'ͭ'0'wK3n֓J;>dGlunٶ?2շHXMsKGx ]BGL#@[7+*xcwD6!~(||̧؀[57L1LiyMSB\Wҿ[VrOc?|bvM=sӨv-s0g(!7=ߤӒ/C;QW$1`.pD)5A?5)C2Y)C2Y n@C#9N`9@1 $]1&AQSckEYח?Z4ȡ? -=?𢅞*fx-zz{VY=oMc(h=c趜Җ,ct6Kކ(&5Ӯ/ꤧ]_h$ Ou6#{6*&m{v: i8 ܨC.wEvԋ_LU@;௚DBf2UϪ_e==_Ҷp$urGdN0.i~7jtNґi__yC_NIe9s|,0׋4^^^/ߎJя囓fcIN>&7_~,;Gp9EY9?Oayy,R*˴&y/o0Ԣ;rVȻY~-v|Àn}N4ଘp BbɰFq9ގw>y=ؿr6IB4MJBo|˕no7p 7+p 7/}Kf E%YedI#F2\<2t.)Bƾeu-| Wʤj.2k%#hǣZԕ|qX%"l=NME9 ]9szT!kh F0`7`dx|掷Dk4Iu2R)1+;fewǬww=N10)\49qlɊO(Bn#KD5/qv}MaƟ{>Q? 
|T-ŦZ et.1gI'ɲldq;2yQCL4v0Oe>jhL<1USzVB948=px]o%]◴bgfDəD.4*k [幓"ƶ"4IjJ-)fWjK>rruUbE|),rպ&n|2)r/tϓOg: [A;E tpqvvqDjR.>-NP-.t˟",liՏcOxǣw~T]m7^K℣F묛$#D*'d*L|f1CXwdg5?+j~j Wv8V z#doɚl|w myr{g^=8p]N)jڻN/EΜyJV_)A%X([q~NDe@Mr,]]d 畠sGLz(){F\H4濚S*BiY߆ eÚS򮢷\NCDQV3Z}qftW׉_`}h1R޲,Ћ`Pg 8&arjMvY;6dgensj٭X :{`3oXrqr`$ dɺEFM*Ն3-SF t-,[[mly 6hi QQڕ\=w[ 3L}w~fZC ڃ(,絣$Toȍito[+GkĆizV"YKjӆ~f1|ŷ_|zBn4~te!q&̓ iǃ2}E{-W+Q4L}PՏW+` xkӪޣGVt+8&R9H_ؑ9؝9Olo|Qk`Xe~2 G7֚t,Xrm&b O˩EtWu"_|mPlOge+m@Pf2/.Wt>?[/9ۃ>,ݼoIע|s]ŖUwận5;Kgw'1Ptuu"KO*ے_C~;Dųz%íH0nl KēqyMvM޷,.6p-R8A>7wP9 njSS؛c%k&%j9P1GmDȥJ})Xb*S,JIp⛉2ΛJ&N@ٻƍ$WzƀPf=z{=bPG6%$nfX ꁯ(_VUY8b,`}U39WR 'Ŭ6 +bBvZs RrSz#L$Ydw0GXi]X cEN+OGCPQZA )[JulLRd%E2Uԅ~Ucj[h-=Y}bD)ѲoƓ،&*5uیx,C;O|:+B|iE/%y4GHbijNݻtրtMݹ]M4]}**!r3lmѫx%q448 S!l5DH݁hFJ']#")M⏧]#)mԊ_`HiQ歵fby>m\_Jx]RMܖ~"u2@fH U NL؂dʡ4 K3[2 NO–\.eO9EXEӄPPt.Iz@-x?ff4=g]m1d&!c8H1*5w[sNJ>-I>߬D',Y6:d6o/&2Șayro25P[m ҆SSB&e$dA^x7}!kJf;^3DC4yт)P׋6}$=D },%~B`51ЯUa5Xt4g+8h˜Fx8?zZ3;p) YyŞEJYe1 ̘<6DSJH* ζ!#`J{~Mn;>sigK>no 8;8HNX)Ίa+ܕf7A!ktC <;(U<<_w~`oW<d{?4͊XQG\B8їboX&E͗t$n4?v!ھbJ(寫[#0o=q}1NcZ_PQQ<խՒYUU;hVRv%;:ޮەgyNb]v}5c9Dz,:MU[)(r b47H##w߀|qׇqoxp <4:rٝ]V-Ug{fNNk.f+Mvgk{SW{MQaT|T2HXh:v{V k?{tT+NW9BumXf  LY3ԁ,%R #u.`MvY 6q|K/ꬢR ^?DR~-l/xr>FIyǫJs}:U[^?_?$7>cy\&2n81p `xF*5PL[<!Uqo룡yǶ qu L{ y|7[~K)|O+.hWQ lB֊wjA5ֈHQN_ ;{r^7M,uyݔy-$Xv`sdA<6!u4Qmʩt>ϥިl~(PB^-¨Y3hEAVPlz[WpԈ9y(dh rPLl1ͱtAPEQ4g9Do@ɍ ;0vF {cznr0%O`S؀ZcD4dK7Z#zћDG숖*ιH%,6)UJ [A '%0!o#k!H6M_7}t$]z(CcrφjHߓbqG5$hׯ\yp̷̌:? zʼnv ,nRz$sՄIzsDuM&[ *)$K/@B0}]XpPvƪ1g m/I|H\}P s1"H[K^b({$b\\GKO) ſXva[ZB_hTѠtglX)r*+\(A@i-3#,7_D?`K\|n&1S#֐wLv:o0TRX]Z ʔa\ ǔn̉P5L!/*G^=%2 cV ÖLc4~M9Y 3)JL"VNCݖ[`NG"f&̭]9GSCr J!RJ!"V! Era"w9ux>VS˞E 8Fm0M0#XVbDnDMߣ»0w2iTq6Z8p!! 
bsma2>D\z.G˦1 j |F/'_7cm1S(LGI~ۛQyjQ>цlg5##G\4Gِt5gذHD{{D:j/0/ 7&@`Hs̭ !mC7~ApuHZ n\ޘ^XP*A`|31Z~Ǘ?0nPMũ֎GmK?kBKGvV 9p:f[j\ug8F;&p!EI`2Nmx6 [pNE{!v;˕f䃫Ѷ9IB3Dtn3 F&ˍ3 wԽ8>yrwIx 4;h^v=VR ~ F,orr)|EOf5ĕD/tp>DCD3e"3v+Ʃхoᣟ1cޘ9/)_˩wa; U"2L9fl1^hSHҚ*9nag0^{IM /+Znz&,.;B *I\;t1l=%1{]nFU˭6[JVQ10;m9Œ)B9嶑)qc)5h<獴hr`ι qb]` !a1ES_& $61W`Ɨ+<ow(-.Xzi4r22+x 5*b`=z?_wfg;y;p8֪h_[ _YXb.2cAyUMTl~NҧJ}Hby'n\fğKhY未+s5wz i-I6)Mڣ6j|)5,VꢺԻ[34Vy R#X <{틖nN48Sk(ʋuUʅq/Xf67 H_ z2XFr]ms8+*}ۣ:߇xvJfo $6K$9l*7HJ6nt7\X1!g!,'VueI,j֋s$ ~YTXM)&*RX!(&24VL їwV23Kإ2YL̼aT=H~U5z1g;ΛQ 4M[JoYqBb!ތ<@\QgL5zupGU-N魱'7TPBVJ ~oUba:TO`P0鄬V}r"vSͧK(~YSȱȱȱqU".+PTg,H25"$d\ZU+r1ټ@.W.nG׋bP06N(#k/?Kj v(h()n O qLϱDž+ F;x;W>8ֹn zk/kqpMfEɎy]qA >=~{ps'0Tj!W*wpq$ppW妵k8Da'A1u7cZEcbGªC/gdmZ%6?ߗRކeKդ!pjj|5(Oa$llTVooϏN]/?l}s"lf~/ym<"af-aq,q*גpkw]HE"yӢc4BBZzc4_ sjE"_%*E'is&Xp➞UHy\нU.Ia]н >Tv?5n_Wùo0)>M.*90ǚ•@ K"%lj-n`]tgn)ZP/UE^b؊6f3|Ufww(ѣd8yg>OQ:/-`8vJr|{c/yNj.CDvݥmzIUe~MJyK`k\8|`gL2Z*zVY(l^Ko+2_D!Ωjo#I *n?` C@qM9a"j-WU򏏋A$x :(X; v4)e ڎp9 eXHRפ!f" JMi|3@VL[kִexq?>|]S/fyNj*F37"]EC :xZAN'r{0Os _z.=%A'sX6ݍ.XlmNH6os`ګ} ؏]~jZQX54Q2hL!5DIdL!R@J)1c}C_T7G1L8(?5\&VzϹW]@L-mChc }`͈X%qGq-,$$J3q,pDؾ(߂yFDJ+:31CUj99TP#)GHP *ԵDj?&83Vylrk 'tWlwcYHϮw5(PI~焔 ;=vj&r#\UNo;#E8ˆluIQFJM~yK# pJ+Z︂1 o&c l #̸2>@^|%(I O]Q\MEjK}i\IZ 9ːѠD lb>6tbb%(ci 1LU:mjغP_*9/\mn7wk3fr`5f"!ʛ_z\[|pJD3l6 qo1ͯZrn_P e]o #&J`D6䄉` zgpcJLpdwU&i;&&:|k~M` l #n=,[nMH  ܧ>UsKq`'ễb#:﨣ݎ6KVLifr{>ձUƺ(qkV_\ҟjJPk#Dd?Ǫ Zq:s9q-( D\P@0n2Zh ]f+o]JXΣPBp )fH&W4LiaUS@0Z%͈%VpIbK[0Z*6vJ6d) yyYIEs2 *$O@>H(%ߐz ]HTIuן?\+qX 1(cpXJh;yekn`%g5r.(:Gj:3Ņ{eř~5٠^Q^R*LL'D,4HP Yd<瞣<τb4= ouRcJS&ED U@D1\YBcJvKhEP=ϩIag1$MQEE|-#QEԩBY P1;#cv>|o} t9`'YFnk|=&Iz=忊lO:ES P>.4;4(Z$ E$1&IE 'Q* VNiIXb}./&>ʼ >C/:v$/m?g++$&}`X"q0'u:a=HJZe'.bN+@ˮ\,=@H'A)E p>r0T`ʊh>2r٬AR-YNJadvRI\a(ە)>)w_,ܕ uDr%.}k,ɻ? H^߆oʅ`MDs|ϟ6,pyl(]ΎLOp5fGS4S. kl@ڼpn?~Rs6{nm4YuXw֖|#4.o:H>Gr`+"ZP}d0jI0%¢Rwxwд?؟EF>` -3:Gxˠ746A'rN}e8`MBo?3|B/v`4j0a/sYZZR@iXezCpg[ M 8:m-Ԫ^???6հbmY/+kqtdF$1?F;,?0؅1.1vaq5Q&wV)@"T=7$,SLf4cD8IU! 
e gbEq X=}Š_l/ߐ)dT XJ{#R e{|zj؛X3ƈbcoܟyAc^ɚh'4TW s0*58ct;7oܼq[3D%݇p$ i L"bJ 0I,SmpIl,Ȝ {uV]r??^rQ>_ƑXve6s~^h MiX%`RM3&v$)paU"PFƂT\j `k:>%SnY@jðoc.!DT󔥌gHfYʙNRDk!!(:{%Rv̬j3HuFL(c)Tkci}J, D˳G0T"(C74s ]q%[)&uE c{M]kDŽ\7qO|zXY 9$yYw*<>vE~Z>6\='Ht~no*wccbݳ'oS]8 tWT?On'kܕ{;wіOl0 R@iטQ!\S:@DD 97~hckɊZ9gj{㸑_1MEd|8p`spY j+֋#dWIVk4#ӣ<#M7TXUSjo8BW!Ϋ}!@h. إ1hv MuDV(I3?A jǾiڲGqXG(@*R`78 -&sXa8FfHF'd Pe`safKlL7b;$_ f+RuL& 8us)f8r9>قGaFѤa׃{̀ZrS+^ XczJ ?ծ+ڂ݃q30^Q ێ>}>NWʂ@,?2dy"9ܾ}]=+Guݯ]~~9ܰ++;aj\>rQ% YO1 V[)1w6:I<>g p7nt 54X+)K6j얄z.9Zz_>eR*#G+RiRivXAH/jnjF*+c[W菡!ؠ6fY򥈢0 dS8u=A"Qa5l6udPA藧⣡[UqxO_⍖b^Ady $Q$RG/0^PCZ93: دV$0R0ʤ`EJ^d9܁{6I'SP bp0bƘhF/M {WQef-bv/>wNOP,(Y1m%+PBIBA)GM H&'{}{wX EfOh#N軣tZ"wϧ_ǧ6l ojgt=_ k'*y0BI5{40]-*эsj775Fݿ0ݎ=[F)!pwlNr{ &Y< u;l]- *q(gghnWS[IJ9ݝqדp4rۥnoWC6f9gFb'jiV/Hjѽcr[y(dҖ 3P/%೤:c"gVr@dirϼ΀Gc1H+߭# %&cC&Rڃ3uU4ۗ abC  ^L!z)2gRaByra]ɐ-:޳sHܚ_xHMhwvZn3R'FV Yy,Zo3dvmF1X/zhfūlXB*Z JaQ-W ̈Ю{5u5c=~.Og?Y }5Њ ypmpmgx+[d@cXf8^t<d;y>F]W UguGO|; oݶhCD޵깔-45.%%"fULElIbEosIʓ,T&XH^ߌ|U^!ǟ5`"DV遉4x1jYӣeP3bИZx䬓'd?ig: >Ptg{9}`dm|?- u_W9nsn7=;[}vo~< =7W_q?Ŀ9BH(,L.mhF흋p0WT,QĞ-;l;_pې\(r5.oRSlYq?{Q'!`'t`&:cnDp}}%橢~C0_ۖ/-t;ݘa M|Mgprjuٕp~Og=gDцgz d{Ytyv˧O R`SwQ?SXs9VO''c$hoWe4^ noX?eְwglav;0+>z{ 5?^o ,kC1towrv})(kp+d4OVwQ)k'pQns߇y'e//mj>L=a}}/F6rC_&C$7 j|\-F]~l)pEYj !,gR" ̖Oږ(cQj%/ot!nw^q7bxXSM(&w-[ŶvXKY+0Ma T X}GɦˑؖOW!-{kӍϫϫϫPw.GS"fpAK@)bT (S;U6.6K.:,Ms맱/i˟h[?m[X˦ :$mvwx]lQ(D9jRAW۬& .j !{%qec Y+7KEZց_^;h MXc\krᒒ\x9a]ɵӶSBa9Bkr!Vkv5Uy .5.&jn!NڇfXn,]}ZGOYº`ջճ~_|\upWBt*qO(}6 ya#51[ͼ n̢Q11V$^ !䖇nH<}, xҳ MyhX ;:8Q2>h8\vɃ['ƂG;zf¯ie{= _^ |ȅ%hjjj鼯}nѡʁ@Lb@Ƴr@-J7 Le}(n˟3DMQ-*Q (|=H4_r2;C),Af"Y#\Ƨ$Qx sEZHQHB~!7{,$}nlha$<)؅iA1XҬOgb|-G)0PB$ϻ#ѡPl[6D@_랉hV|rV1můKfq֙_X~.ʠ+rɈ λ7[;0fֺMpn'juD ؙPjy]9s ͉, An )bT9|T_߿߲6JF|Xbw|XOŅ@NLL+4^Lj`nA+r>Z {D1꼯}KmS4 Qmk2d (Y R϶h)4Sc_4b'+%+܄J}maM}aF"yUX4b2%sdC1I 80с ,]ҴjCs?oi|l/};k<1G80U݂ĢzZ5)B 6ƟX4vP5"WX槳_ .g!UnP41펳;[fӞ@YJiLbfH9+tNIվc{Om іT5Q]wNj|eJbFylDA\C3szjOz*&`še`NjlH9úhna.JviLEvtxSNZ9H0^'?ӵEVF]`U bCI)W Q7^殕 OӏbRM=ib4EbxxY"#F&0!EИtrr{m=uc9 S$^@G%1DN 
y2&x?'Y"qރPˎՙ‹a_ܽAE.wxܱn^5SMM:v:/][Z]MRc|$ݞ\X)͟>g'ކϟ槎.~:`2|0G=K*Hp3GƁt" D'dCFIe(8kX R}ӟ'}"jzVӖ1ހt:Eh80/u$eHc[y']TJ}bq>/:!TvnIF$)p7b(K){@ƲA!K .͑` `K_9_s:D!@HTZF ?-Xq$9MRWߖ<Vs5jlJ]&K7{Әv; #Xhx8M{[I5D FoQeBp܃1ʘ`C,Lz.E@)ͻlA;ּL ԫ`dM̍hedDn AbKP62Aqfجa4*](A=e\c tNtIFz\(Ckm5NnVĸ靐6pQL#m ]jBbiv3-L,- 79qtеY<Ig1~s&K9) s){ZgWRܿ % 1T~.RЖ1ކHz"5mXy 9>ڭ/ϟ% hF ;NQ\<쩴=)K^Y>hDū>\`\ ؉6&D~O].a9ȏ+J6. |p 1fpi1k= 6( Ɔk _E%)*VkiGPv4>"ځjkݘt˄32ؔqηŃkoϷNKŮ V/eH3TA!$dL?%E\.OR^* ,mV@k 5E.ozݵ 8: Q@Mx%IY@(ϬTwhX[1(ET֠a:zoC.XtR4R–J!݈Gu:heº^S&QTm;;y'P.cQB(5X0'NֿIm\f'z!;9IIU>㭴?U)OU\]ZodB>qvnfu 9ofzJkB:zlѸT&2 &J1I#IM^:'[x\?/VdH)MIH:MGdJCZf'h)> J!)SJz VI9J:iU Fx'W{ G}TքQf5C)=Z\7|ԽdhZ4ZKq%AQT̔ G:%j9)WRpRfg5i-^I <2NMfv'V ľ/ivק]fw}vK@%ƛd s<t!Ԧa΁({"0V|ʁ:++\p('9$$rosI~3P/n,o>,GC/[{1-( s;>XGMxv;}~7+a4Y'WNNR~;+"] ^ [D[7o'b\ub,9bV3 M+90hqG،'/E8a./صwYkD+I&$}"WZ󞚨AǶ鈥5eI _/7zr IŘ_Pϓ3lfh!ײ}zձCO=xv~ :Z??]zB[FJ¡Շg٪OdGUQ឵;pb-7)N-sN.}+s:wu ݨV '٠9^c_WJ+\<|GT5+?Pj1(^Q41I5^4!JFZ*6He BC߬Rǹ O;:#'->A(3Rn |8迯fّ.=MO2jn_^8ϼ.̿cAyfC~J!~K{NAR #e>OG]`c-lYs~)Ϫ-1޾&+Od\<\q)]򮣾tDZ֣ /\#;Ϫԣ$@y_ v0Ϳ=]_$'kUЯz&tZ4Jzl٧ٲOe-(Q c\7r=7gFmChB(Ñ)!%T(v&3x0}e=WusfX}{گI-|ՈI,FRc5q w\k1l \5\kd 5+\fL3B=@z*zb-?lH|o^B|R*faS ZJ7z.9CQhWbyK|,[uV廉ސZ8z +Ri B9{IbZ "6~ܧZ#bz6Oh)rppʗ@\5`;"Sx\e=W[񸖆gѸkݽ͟]ɨEfpŶqշ0ƈ0ب_nk XlhA8| Q[$X5 OiGg)vn殊9!^Y({gCUl(d41DoYlY i}@*tG`"dAn/~v 5?!jϨX:VW!XE!d%{gGIKy2UeˬǧshTHdӴӘJMP9;IDqc(}ܥ=~҇(m8[ܒW膝cTlL7ˌc E2ƻWoF,xʫT3zwC끪8z4bjw'G-Hw$*j5 ӎVX?{{N@>p7ƹ~fk!׬t-V<I3ʀ(qbh 2G;v^:kC%*}M#+ƽ &ɃHm U@\ 9z?匠D-+Sr`j|if>ZA`m`` cMqc@UnٷC'yJlaR֞n")Y!ڕy]ր:6E (xmgy}pc-[ Oi/]+MF V+ØmbmO0`]"au9p^wv.rВ^K+.dDTrEw`Agm#IgvڪtpoS̠֢)&'Ѽry|_;5r^q=Ԉ2ˬfR>cVco`e%C*KZUV~H xS'jWQ`#216XØO QlRtK3񮥆SR ioј=8  b dĎE{qnixj 4T C~-ͨqnV֏(b dĎEiEXaIhmoW{ 2/2Gǯ [۪,+),LPTd ox2\Y-2?I)z_l:SC7 pj( JgVVrXs${! ٬1ipK?/iJ{s!ej7Vm^}o^-l*= ␕VT@}-pCOż^1hܗZAi#[ohR м*ϯ/7iUooوfn37TBGPKk"EAl*.Bmfmf{2#y? 
ڌGl%b_+!=I) WPxN*WcKq{p 0bPt';>߰ Ndg1ڑSFrPT.W($0 VlH+M)3Ɩ!38q-&$ Y\/a Jjfc5O=.&.@1I<Ҵ邖:v|sQj%FӜQxoNz[jpxb'k*5JܪyRc敃 k)-Y2d"Fi1fe'#'cr#lA.\(w<Rr(5}>%{xv m:` Yc :W)4p sh&KehU,5>mcxh\w3Tux4mFϧhqҡx1!L)tzkh\CbM-O%7]*%)Bܜ˶4ڿW?4rzTדJ]/e8ݶ!̶M5Cao"IA qY\!xx5xEn1S\3b돎D/fu]dq/"[40iA;nFzu5" Anv T@nZ+fl15>R$dƦDߑ֐cSQ':YZՌ$MZ!2(M/. _tIX|C+MFlCQ4pbV_&e]_FQdQ{M ] 8yj)sHRC(j=4+5 R* _m~} R{PC.+>s@-| vψ`r3 $-J[R&hhH@MltOTi12cK|zr9wuE .EV2ys"HJ[ ؘ@M/P% lk*/E#U1%Y}}P<暗B$4;wt"):hQ,RM'$.3W͖4iJq4NN]X͆!ZY,jU?PhHꋿ21,̌#(?WQXvAR?mPZ"KyviΔf,_Ԯ"pYc`vgV=GFP7阯<*R- +X5)@=EôyKs[G@b  gp!A8PNq"k]BYGJd[DԫvMx&ɛF廟Ȧej#{XD}a6Ѫ$[RTeŕdcKpêp0P1J@ `KZtU%x،;]sZLͅ@ 'ai:zRRY~ιNryl8UT364_V)(Y޸~P9Mq<;9ccGT/CGMCVD򂒂FaQcɞ1vEt|Y@(Rze8ƈRZ%gUR ܖBZ*Qxg--C&q>W΀ MWb%)uDfU)5CmeHm ~ cݫ{ yR*L39R;>!W1cc>?DcQ* v\)؜0،L|hSd9Gd,0r-bc(%z T T/Z٣UNrDزR–r =clls0#LEzY6h W Oh٠=;u-Ӱkҹ00֋y]B~cDdEoI-Uߕ(x]bUm_(ʲ[_K\> 8WJd..'jyF/kWhȬ ]pK J]:@J0Sso; Sw-3Lʌ"$;<Є}€,r"nR#넯?<Jy$B}L9In9L;f/;RܷC3xE.kj()?^#?:m\6,MgYw1tWcdfkojY_sNͼ.ԒvS6)hJպ&J|sO'Jқk_n}0pqP=-iB%' }:!\Υ{q!r6B$YLj/Ix~.iWknKI{sC,O:(fq#M{KSy- F3;sK!cvXY!* #|3)Γ/lp2e3x^<088fX-20fJ` {E{,QQ;OՄ%!R $ A 0G*s`!%jq>shP\gFu8xExEtİb Ih%5#n; @I"Bs˝̘ o/V㯆M`?~ؐu:YV1r>}mhR}?=.3{6~  6]M{ij(@i=}NSߔY$SFc18$&0 H\;$9\oѷĘb/G-7VՆ-٨13֜Ix-FZ=_Ғ'phUK0|0}n$+_ _W1 y}g9X 4vkS]wy5?(XZ{hZgXF0-Fk˹:AdQp% YxWm0xaZ^$qiZ!;mT1]:(W%i)o-lXΎ*hP 'AGphAr9nBAYH5{]| Yd5[A a`' PuQjC4hWP6[ڟؖ7@1Wsoi_fs‰_8 Dp|[YA~@ST˪%M>'EbAkʗT DiQ^@<IYSw1 K)V Xb7*Bps$YRxXx\ g8smQ @], 6T bWm R T1 J9c8%f@Fg Fy$:y%I7kiv5^]N\;}1|?hKnFGpFGW0=$ bV,g20,`7[|=}Q,BJ4ʲ` Ԓ ɌCP?U|].uZ8]> Q 뻃xɘtz3#gvgc}o`kM<AgzיCw޿:6~tw~ug=GBC[oo:{wm}/LddԞ} zٍ.{\/Yuqg[Y- wl[֏`x_IHcBW~Ꝙy2- vmXfM8Y3=tKZRy{<< f|~ _Oe/f,!<*ݱfNaL=6^p`u{e˳N݆vX__se2l\J~qˆK-_LȘ\_q{}iyQ7)8o7ӷ:{|?Eng rgy/f8=\0\ɻ=9L^!w|SmOmbp;AO߲pbvV|7!F! 
Bݽq)|.xcN|P^.6<؞ =?7?k{?viO흝l~yl "8&g N gҖN OaP7ߗ uF'͟ppv_b7{4F@u74ocS9q3p7ϭ`60nI$q3$=u'q3Nf¡Q sxEnzҝrB[ԦIYuox(Rp9.ۭSV2ټւ mx&Ą' *p E" ҁHIb/P:FB$5(t8(@x?}ma@l /(PLF0(P.*1&DeE/-N)V 4w(* iĥ6\;=) ky\2Z>am.8ԋ~XF뇵;V ZΨRɳci%C:kD-kO H2>"qgdnc"uSBnr )!Rw>!kPZ$7^)is q" I봍>X LbEyٿ UxAgA'Έ"%3 Jn˘A5[6 d*d)NaX*u`b1"эţI(h7@ WT$ mp0Kv۹ ykdgq}񶳿f .0?B~͏ä`V >H-z)ohut \ ;Yw֜g&KN|[4ARA!}Qi Z-- R=t'O+t>3x%+m`}u8| NT%.=/,\1!5UVZRMu>ébՊ:6+G 02(5 \NKq1x@5s{KS/kfnqI(NQ`cd !ca2eϛWE2S #dZAH)ct)@gWG_-+C ?SM%'V@@W~: +VY} DƄV\V& QM2S\' S-ljӦ,Kȹ\ՑqTŬZeq S3).u?b+iHE GTꈢFp㈓38kYWᯞu2[hEPWʯ% .=ȟWvbLSA1sj.j}aWxkRҡU2ek/IP66 Bg٘iLDzb{3`K$P*~hRgj4 R4et0U4[Y,%R,%otٕ+Ĺ";9 W #uR zwἓ`ϱ: kB0bBsl]_e) yQ|,`hKRO3I'kôw;NL ^xO0Ep;*s&'ᲫP8\h)J#H#ptr =JJHk rc TEA)>[Uߒ直'Q*jZjb%r,f>+2kBJ d R[pp{.EbB&j(ʉJ `o-i̝R,Rl>gv!bJT:[T!0c9.+p՚zU8$pDE"."re4jiBFH m _n 1RuY@&  OSL8A -,vWQ)VwrYTE-tg)?x4 IJ1j!&Fp}[=j-WmS͢u~08pڀ'T X@?"JXftN&qwm43̣ f?.''w1ͷ$0'ܥj.YiA,$RA,$ ]g%$XE<欈7 9lƌ!^ s@J( ?# ]$zmCn ~L I9vRER )RDἺt9==]2jMN"go٢'?2Yt&[Ä쫥]}Z/}E$HJt]B.gl6P,6U48ʹ´ޟl0TR7֩7֑p55 `Xf略$XV{x7*ѶGޅ6scB69.tiq63>+KQ nQb],mlw)ɅhԻCj^Uˣ"ܼ~k@Ug}Vwnߟk}NF-:\+Z[vGz#]_. 
#|n8 I(.&W<]c_0nB`jRTu[Qӫh$5cVx,II8f>Q-Qm3##lUeBFnU|U`Q}j@#f8;@]dݝ ƌxC)2Bt3PtPPBp;ы\vN7QcqqhLٝ wKыtd_g;0(O[/6+Qۜ YKIue5~azW"׏늧X$Jk=Z.*ݎ:Vo.FaTyg~NLBk![\sٳe5Wï 1O)P2\`2(aֹvlIruwӴ]>AĽ-+~~:!tqsDX䗳cK}pg\lԌQW_;gl_< w)ReqY~yo/GE{||/zYn:pks~KAtxej{r~(O3"Sri?]n%軐ft Ŏ;[yfj.GIs8SBL^V}ھXFd\sE~uLOT:BAwo(|3;qw@ͧ䷳gr/ۻ{v~?^nMG}wtI-,M{Gyl4LeJqq Q(|zRN0 uDPc {c>;9y֧X7 ;|߾e:|گAkF( GT"eX}?+wɨ,4y~_j @+@xߺ):RtrbVWW5S|},<F(易JR6ƺX26dgR٥L^rjٍ}Lr!nFnFnم*ճM}E=ۂ?Jf#г ;A-,ɺJ*^ 'V{2J s:o/3F]䴊<:ϲIȍBU̇ lF>&y?ۍۍۻW<1RJ[3 cc>F D̓V9Q/+m[R4dCȎh ?Z뿽z/n:,MF[s;|Sg47z*f]*9`R܃.1*=whd\x ̏&E竅[Y0fk]pź ۓO|I'wk%rb}ҿ`ek)-:9)S.['n@$PjdLRRAK4IgbQQ)j.!C&w6IAg\#E(PY/@#uf!{/.?(\eT"fz?IbV(U% /Q9^#-KmT:@ʃZB H:iH}0w*:k@s`B/ Yk"^Q*0HU+iTڰdWNQ(DFSI )!3AGne(2|Ǜ˂,U@m:!v9F0y&ZKisD eGDŽH2X-S ƥ@c[ø"^Q֊Dn&-ƲD{?N&RRIb$Jj+Ep#`G"|zb 2!)Z'[cѩJ杮 pOg:+`OFn~αmSS|FنdO.]̃$Y i54u3 ׀I _,&!#xyp.&1Ƙ&%!9!foI{isjPBz0Rހ-Ndiڑ+$t4vnӐ$$xVd)K=I kII> 4-$/l;眊K V/I|=\`<@Rm_KP eZ Nг! aiQS4?#KEM <R` &HPض 08`Z >;%mF6I]֮8J*#20 J`GnB"HZfv:*,Kmb"ϋd=b²紥*X[&ۏG#{CM hg`hREpd45m$`)4"!;'FZD48<ÒּSq U4Dh1$]@ ;HFeS=$TmCwPb*nT@Q)jw@XuTdP>gLP J4H{Pi_ ջ&0@!jPW=r h4QuKc PP)2ˏ"X",k̀$H`F ֶpq@K?G \ >FD[&[ Pn7Vp):햜iOB*44YBy7Ɉ(JVЇI؍I R}T̠r]A*JTu9S0)0ՊH|8C."x7JI#corCm^v$|scpS`}1&[Z 7e[a Izn%9N,mne2(x3&@Y+s6S K8vg[7tq ᣊFZjzn%1h<MIXCLRgDA֜c h$fHIQ,)ҒR6^F-%֨m i$; b iD*<>D}A'#r۾1  rGʖCBn\ eV@UOPI6aySSLI%8 B\ր{H5ɱ5o{N7/$p1Թ䦩{%Хqa[\]- Zq=,lmʶG溿o?z{Oo?߇6|AjPWPBhEe)cBю挿=_׷|諥i."f>S!3 SIEz/fẝ7Jq ٖwafr=Yɲsj|!*Eܣ k7/)l}ncQUQsrO=o]}^܈˟3{}_6@7]yŇl <xoBm ]N/u8NHV` wJSR8C u`]Cyi O/?+9]1?q2| 8cLׂzW| RIy3o{}ɏwdIM}*nu{p֌]5K^IA$nzw9wacQ݃_Vaモujw;_Kb0`ThB]ѲbU*ueJ")QF3G}ϋ]#Kce܉^Lk No֣e3dy:6>{ ̂m~XC ?v P+F3 MRiIuGFxx U]-ye .KUV*$r-'#]T]SUbn$e&U\Wch\ׇˌ^f2ceFu?d(FUJP:CB` ΆTRU{URRyv]s'r]s'@<{Q;{bK$IwIRgY34fhú.)Gc_Z"W/éةuVP^KL{XTCCm Pɘnh5ZDt /RՍDkX[ܤyk=c9C- :`5 O9(>J)aKm+h1bpm>\ç\ZNRilMkg-Fd|>gL,~+0 l$#.~@,e#t-RZbG5H-HYHe,3D 'a}YMSL۞1a!3? 
Feb 17 13:45:09 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 17 13:45:09 crc restorecon[4709]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c263,c871 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc 
restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc 
restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 
crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 
13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 13:45:09 crc 
restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc 
restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc 
restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:09 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:09 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:45:10 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 13:45:10 crc restorecon[4709]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 13:45:10 crc restorecon[4709]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Feb 17 13:45:10 crc kubenswrapper[4833]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 13:45:10 crc kubenswrapper[4833]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 17 13:45:10 crc kubenswrapper[4833]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 13:45:10 crc kubenswrapper[4833]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 13:45:10 crc kubenswrapper[4833]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 17 13:45:10 crc kubenswrapper[4833]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.798458 4833 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808467 4833 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808517 4833 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808526 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808535 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808544 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808554 4833 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808563 4833 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808571 4833 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808579 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808589 4833 
feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808597 4833 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808605 4833 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808617 4833 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808628 4833 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808637 4833 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808645 4833 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808654 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808663 4833 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808673 4833 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808684 4833 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808693 4833 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808702 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808710 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808718 4833 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808726 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808734 4833 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808745 4833 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808752 4833 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808760 4833 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808769 4833 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808776 4833 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808791 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808799 4833 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808807 4833 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808815 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808825 4833 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808834 4833 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808842 4833 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808850 4833 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808858 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808866 4833 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808873 4833 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808882 4833 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808889 4833 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808898 4833 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808906 4833 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808913 4833 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808923 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808932 4833 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808941 4833 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808955 4833 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808974 4833 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808986 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.808996 4833 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.809006 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.809015 4833 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.809025 4833 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.809068 4833 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.809079 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.809088 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.809097 4833 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.809106 4833 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.809116 4833 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.809128 4833 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.809138 4833 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.809149 4833 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.809157 4833 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.809165 4833 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.809173 4833 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.809181 4833 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.809189 4833 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809381 4833 flags.go:64] FLAG: --address="0.0.0.0"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809437 4833 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809457 4833 flags.go:64] FLAG: --anonymous-auth="true"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809469 4833 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809481 4833 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809490 4833 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809503 4833 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809514 4833 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809524 4833 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809533 4833 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809543 4833 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809555 4833 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809565 4833 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809574 4833 flags.go:64] FLAG: --cgroup-root=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809583 4833 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809592 4833 flags.go:64] FLAG: --client-ca-file=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809601 4833 flags.go:64] FLAG: --cloud-config=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809609 4833 flags.go:64] FLAG: --cloud-provider=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809619 4833 flags.go:64] FLAG: --cluster-dns="[]"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809629 4833 flags.go:64] FLAG: --cluster-domain=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809637 4833 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809646 4833 flags.go:64] FLAG: --config-dir=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809655 4833 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809665 4833 flags.go:64] FLAG: --container-log-max-files="5"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809676 4833 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809685 4833 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809694 4833 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809703 4833 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809712 4833 flags.go:64] FLAG: --contention-profiling="false"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809721 4833 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809730 4833 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809739 4833 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809748 4833 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809760 4833 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809769 4833 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809778 4833 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809786 4833 flags.go:64] FLAG: --enable-load-reader="false"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809796 4833 flags.go:64] FLAG: --enable-server="true"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809805 4833 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809817 4833 flags.go:64] FLAG: --event-burst="100"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809826 4833 flags.go:64] FLAG: --event-qps="50"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809835 4833 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809844 4833 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809853 4833 flags.go:64] FLAG: --eviction-hard=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809863 4833 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809872 4833 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809881 4833 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809891 4833 flags.go:64] FLAG: --eviction-soft=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809900 4833 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809909 4833 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809918 4833 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809926 4833 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809935 4833 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809944 4833 flags.go:64] FLAG: --fail-swap-on="true"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809953 4833 flags.go:64] FLAG: --feature-gates=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809963 4833 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809972 4833 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809982 4833 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.809991 4833 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810000 4833 flags.go:64] FLAG: --healthz-port="10248"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810009 4833 flags.go:64] FLAG: --help="false"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810018 4833 flags.go:64] FLAG: --hostname-override=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810028 4833 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810067 4833 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810077 4833 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810088 4833 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810096 4833 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810106 4833 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810114 4833 flags.go:64] FLAG: --image-service-endpoint=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810124 4833 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810134 4833 flags.go:64] FLAG: --kube-api-burst="100"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810143 4833 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810152 4833 flags.go:64] FLAG: --kube-api-qps="50"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810161 4833 flags.go:64] FLAG: --kube-reserved=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810170 4833 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810179 4833 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810188 4833 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810197 4833 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810206 4833 flags.go:64] FLAG: --lock-file=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810214 4833 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810223 4833 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810232 4833 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810256 4833 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810267 4833 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810276 4833 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810285 4833 flags.go:64] FLAG: --logging-format="text"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810294 4833 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810304 4833 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810313 4833 flags.go:64] FLAG: --manifest-url=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810322 4833 flags.go:64] FLAG: --manifest-url-header=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810333 4833 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810343 4833 flags.go:64] FLAG: --max-open-files="1000000"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810354 4833 flags.go:64] FLAG: --max-pods="110"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810362 4833 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810372 4833 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810380 4833 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810390 4833 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810399 4833 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810408 4833 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810417 4833 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810437 4833 flags.go:64] FLAG: --node-status-max-images="50"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810446 4833 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810455 4833 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810465 4833 flags.go:64] FLAG: --pod-cidr=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810473 4833 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810485 4833 flags.go:64] FLAG: --pod-manifest-path=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810494 4833 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810503 4833 flags.go:64] FLAG: --pods-per-core="0"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810512 4833 flags.go:64] FLAG: --port="10250"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810521 4833 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810529 4833 flags.go:64] FLAG: --provider-id=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810539 4833 flags.go:64] FLAG: --qos-reserved=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810547 4833 flags.go:64] FLAG: --read-only-port="10255"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810556 4833 flags.go:64] FLAG: --register-node="true"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810565 4833 flags.go:64] FLAG: --register-schedulable="true"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810574 4833 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810588 4833 flags.go:64] FLAG: --registry-burst="10"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810597 4833 flags.go:64] FLAG: --registry-qps="5"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810606 4833 flags.go:64] FLAG: --reserved-cpus=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810616 4833 flags.go:64] FLAG: --reserved-memory=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810627 4833 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810637 4833 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810646 4833 flags.go:64] FLAG: --rotate-certificates="false"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810655 4833 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810664 4833 flags.go:64] FLAG: --runonce="false"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810672 4833 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810682 4833 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810691 4833 flags.go:64] FLAG: --seccomp-default="false"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810700 4833 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810709 4833 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810718 4833 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810727 4833 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810737 4833 flags.go:64] FLAG: --storage-driver-password="root"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810747 4833 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810756 4833 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810766 4833 flags.go:64] FLAG: --storage-driver-user="root"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810774 4833 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810784 4833 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810793 4833 flags.go:64] FLAG: --system-cgroups=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810802 4833 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810815 4833 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810823 4833 flags.go:64] FLAG: --tls-cert-file=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810832 4833 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810843 4833 flags.go:64] FLAG: --tls-min-version=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810852 4833 flags.go:64] FLAG: --tls-private-key-file=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810860 4833 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810869 4833 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810878 4833 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810887 4833 flags.go:64] FLAG: --v="2"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810899 4833 flags.go:64] FLAG: --version="false"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810910 4833 flags.go:64] FLAG: --vmodule=""
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810921 4833 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.810930 4833 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811156 4833 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811167 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811176 4833 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811185 4833 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811193 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811201 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811210 4833 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811218 4833 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811225 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811233 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811241 4833 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811249 4833 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811256 4833 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811267 4833 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811278 4833 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811287 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811295 4833 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811303 4833 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811314 4833 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811322 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811331 4833 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811339 4833 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811347 4833 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811356 4833 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811364 4833 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811372 4833 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811380 4833 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811387 4833 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811395 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811403 4833 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811411 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811418 4833 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811426 4833 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811434 4833 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811444 4833 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811454 4833 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811463 4833 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811471 4833 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811480 4833 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811488 4833 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811496 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811504 4833 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811512 4833 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811521 4833 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811529 4833 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811537 4833 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811545 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811553 4833 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811561 4833 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811569 4833 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811576 4833 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811584 4833 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811592 4833 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811600 4833 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811607 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811615 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811623 4833 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811634 4833 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811643 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811652 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811661 4833 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811669 4833 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811678 4833 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811687 4833 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811695 4833 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811703 4833 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811710 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811718 4833 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811726 4833 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811734 4833 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.811742 4833 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.811755 4833 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.824076 4833 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.824113 4833 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824269 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824281 4833 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824291 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824299 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824308 4833 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824316 4833 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824323 4833 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824331 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 13:45:10 crc 
kubenswrapper[4833]: W0217 13:45:10.824339 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824347 4833 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824354 4833 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824362 4833 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824370 4833 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824378 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824385 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824393 4833 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824401 4833 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824409 4833 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824417 4833 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824425 4833 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824433 4833 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824440 4833 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824448 4833 feature_gate.go:330] 
unrecognized feature gate: CSIDriverSharedResource Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824456 4833 feature_gate.go:330] unrecognized feature gate: Example Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824464 4833 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824471 4833 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824479 4833 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824487 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824495 4833 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824503 4833 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824511 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824520 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824528 4833 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824536 4833 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824544 4833 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824552 4833 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824560 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 13:45:10 crc 
kubenswrapper[4833]: W0217 13:45:10.824568 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824575 4833 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824583 4833 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824591 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824599 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824609 4833 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824618 4833 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824627 4833 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824635 4833 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824643 4833 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824650 4833 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824658 4833 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824666 4833 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824674 4833 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824681 4833 
feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824692 4833 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824702 4833 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824711 4833 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824719 4833 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824727 4833 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824735 4833 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824743 4833 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824750 4833 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824761 4833 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824770 4833 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824778 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824788 4833 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824798 4833 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824810 4833 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824818 4833 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824826 4833 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824834 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824841 4833 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.824849 4833 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.824861 4833 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825107 4833 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825120 4833 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825129 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825139 4833 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825147 4833 
feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825155 4833 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825163 4833 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825171 4833 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825179 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825187 4833 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825195 4833 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825202 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825210 4833 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825218 4833 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825225 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825233 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825241 4833 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825248 4833 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825257 4833 feature_gate.go:330] unrecognized 
feature gate: DNSNameResolver Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825265 4833 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825273 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825281 4833 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825288 4833 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825297 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825305 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825313 4833 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825321 4833 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825331 4833 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825341 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825353 4833 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825362 4833 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825371 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825379 4833 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825388 4833 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825396 4833 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825405 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825413 4833 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825421 4833 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825429 4833 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825437 4833 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825445 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825453 4833 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825461 
4833 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825468 4833 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825476 4833 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825485 4833 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825492 4833 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825500 4833 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825507 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825515 4833 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825524 4833 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825532 4833 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825540 4833 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825549 4833 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825560 4833 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825570 4833 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825578 4833 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825585 4833 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825593 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825601 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825611 4833 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825620 4833 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825628 4833 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825636 4833 feature_gate.go:330] unrecognized feature gate: Example Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825643 4833 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825653 4833 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825660 4833 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825668 4833 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825678 4833 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825687 4833 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.825696 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.825709 4833 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.826089 4833 server.go:940] "Client rotation is on, will bootstrap in background" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.832141 4833 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.832296 4833 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.834253 4833 server.go:997] "Starting client certificate rotation" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.834289 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.835403 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-24 22:46:35.56495733 +0000 UTC Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.835504 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.858267 4833 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 13:45:10 crc kubenswrapper[4833]: E0217 13:45:10.861656 4833 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.863321 4833 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.890937 4833 log.go:25] "Validated CRI v1 runtime API" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.929495 4833 log.go:25] "Validated CRI v1 image API" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.931371 4833 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.937659 4833 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-17-13-41-03-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.937697 4833 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.952487 4833 manager.go:217] Machine: {Timestamp:2026-02-17 13:45:10.949852904 +0000 UTC m=+0.584952347 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:648b67a2-27e7-447a-8ad2-7acc2e737df4 BootID:858a93ea-15f0-4bac-8fa3-badb79f68871 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 
Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:15:f2:e5 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:15:f2:e5 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d3:e6:df Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b4:a3:63 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ff:d3:fe Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f5:27:2c Speed:-1 Mtu:1496} {Name:eth10 MacAddress:c2:24:dc:a5:24:5f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d6:b9:4f:5d:d2:da Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 
Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.952695 4833 manager_no_libpfm.go:29] cAdvisor is build 
without cgo and/or libpfm support. Perf event counters are not available. Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.952839 4833 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.953981 4833 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.954149 4833 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.954184 4833 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Q
uantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.954357 4833 topology_manager.go:138] "Creating topology manager with none policy" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.954366 4833 container_manager_linux.go:303] "Creating device plugin manager" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.954897 4833 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.954923 4833 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.955495 4833 state_mem.go:36] "Initialized new in-memory state store" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.955568 4833 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.959787 4833 kubelet.go:418] "Attempting to sync node with API server" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.959808 4833 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.959829 4833 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.959841 4833 kubelet.go:324] "Adding apiserver pod source" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.959852 4833 apiserver.go:42] "Waiting for node sync before watching 
apiserver pods" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.963674 4833 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.964628 4833 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.964596 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused Feb 17 13:45:10 crc kubenswrapper[4833]: E0217 13:45:10.964708 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" Feb 17 13:45:10 crc kubenswrapper[4833]: W0217 13:45:10.964595 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused Feb 17 13:45:10 crc kubenswrapper[4833]: E0217 13:45:10.964772 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.968019 4833 kubelet.go:854] "Not starting ClusterTrustBundle 
informer because we are in static kubelet mode" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.969259 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.969280 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.969287 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.969295 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.969306 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.969314 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.969320 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.969349 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.969358 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.969366 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.969390 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.969397 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.970366 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.970724 4833 server.go:1280] "Started 
kubelet" Feb 17 13:45:10 crc systemd[1]: Started Kubernetes Kubelet. Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.972488 4833 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.972027 4833 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.972910 4833 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.974358 4833 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.975937 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.975981 4833 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.976739 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:47:57.530680124 +0000 UTC Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.978204 4833 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.978224 4833 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 17 13:45:10 crc kubenswrapper[4833]: E0217 13:45:10.983568 4833 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.984147 4833 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 17 13:45:10 crc 
kubenswrapper[4833]: W0217 13:45:10.984146 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused Feb 17 13:45:10 crc kubenswrapper[4833]: E0217 13:45:10.984261 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.984534 4833 server.go:460] "Adding debug handlers to kubelet server" Feb 17 13:45:10 crc kubenswrapper[4833]: E0217 13:45:10.987568 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="200ms" Feb 17 13:45:10 crc kubenswrapper[4833]: E0217 13:45:10.984789 4833 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.243:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18950ca0474d3946 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 13:45:10.97070215 +0000 UTC m=+0.605801583,LastTimestamp:2026-02-17 13:45:10.97070215 +0000 UTC m=+0.605801583,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.988024 4833 factory.go:55] Registering systemd factory Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.988307 4833 factory.go:221] Registration of the systemd container factory successfully Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.988927 4833 factory.go:153] Registering CRI-O factory Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.988952 4833 factory.go:221] Registration of the crio container factory successfully Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.989029 4833 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.989069 4833 factory.go:103] Registering Raw factory Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.989164 4833 manager.go:1196] Started watching for new ooms in manager Feb 17 13:45:10 crc kubenswrapper[4833]: I0217 13:45:10.989813 4833 manager.go:319] Starting recovery of all containers Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004440 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004512 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004535 4833 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004557 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004584 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004601 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004628 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004655 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004684 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004701 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004718 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004744 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004762 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004783 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004801 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004855 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004871 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004888 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004905 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.004978 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005006 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005027 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005080 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005129 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005220 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005246 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005345 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" 
seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005376 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005429 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005448 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005479 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005536 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005554 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 17 13:45:11 crc 
kubenswrapper[4833]: I0217 13:45:11.005573 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005591 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005608 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005625 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005641 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005666 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005731 4833 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005750 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005767 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005784 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005802 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005820 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005968 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.005995 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.006167 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.006197 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.006264 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.006283 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.006308 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.006363 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.006383 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.006404 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009280 4833 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009339 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009430 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009451 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009466 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009532 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009549 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009567 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009602 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009617 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009630 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009645 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009659 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009728 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009744 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009779 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009813 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009825 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009846 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009860 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009872 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009950 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009979 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.009993 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010050 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010095 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010108 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010121 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010133 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010145 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010159 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010173 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010208 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010221 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010232 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010246 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010259 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010272 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010286 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010297 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010339 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010352 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010380 4833 manager.go:324] Recovery completed
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010431 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010479 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010494 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010507 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010520 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010534 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010564 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010578 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010704 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010725 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010743 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010782 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010795 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010808 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010842 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010887 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010899 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010912 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010925 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010937 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010951 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.010963 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011030 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011062 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011083 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011098 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011112 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011142 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011158 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011171 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011210 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011223 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011271 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011285 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011296 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011309 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011321 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011332 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011403 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011423 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011436 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011450 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011463 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011497 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011511 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011524 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011552 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011592 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011603 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011615 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011628 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011639 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011650 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011662 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011673 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011686 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011701 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011714 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011728 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011743 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011759 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011774 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011787 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011799 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011810 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011823 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011835 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011847 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011859 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011870 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011882 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011895 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011908 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011921 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011934 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011947 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011960 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011974 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.011988 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012001 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012013 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012026 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029"
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012055 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012068 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012079 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012092 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012105 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012116 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012132 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012144 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012158 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012175 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012186 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012198 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012211 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012221 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012232 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012246 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012257 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012269 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" 
seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012281 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012292 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012303 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012315 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012330 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012341 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 
13:45:11.012352 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012363 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012375 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012386 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012399 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012410 4833 reconstruct.go:97] "Volume reconstruction finished" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.012418 4833 reconciler.go:26] "Reconciler: start to sync state" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.022233 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.023961 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.024024 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.024072 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.025031 4833 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.025286 4833 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.025313 4833 state_mem.go:36] "Initialized new in-memory state store" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.038410 4833 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.038926 4833 policy_none.go:49] "None policy: Start" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.040179 4833 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.040209 4833 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.040234 4833 state_mem.go:35] "Initializing new in-memory state store" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.040235 4833 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.040276 4833 kubelet.go:2335] "Starting kubelet main sync loop" Feb 17 13:45:11 crc kubenswrapper[4833]: E0217 13:45:11.040436 4833 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 17 13:45:11 crc kubenswrapper[4833]: W0217 13:45:11.041907 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused Feb 17 13:45:11 crc kubenswrapper[4833]: E0217 13:45:11.041981 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" Feb 17 13:45:11 crc kubenswrapper[4833]: E0217 13:45:11.084678 4833 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.101159 4833 manager.go:334] "Starting Device Plugin manager" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.101207 4833 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 17 13:45:11 crc 
kubenswrapper[4833]: I0217 13:45:11.101221 4833 server.go:79] "Starting device plugin registration server" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.101636 4833 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.101653 4833 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.101811 4833 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.101956 4833 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.101969 4833 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 17 13:45:11 crc kubenswrapper[4833]: E0217 13:45:11.108500 4833 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.141501 4833 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.141594 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.143030 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.143080 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.143091 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.143200 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.143518 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.143580 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.143960 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.143988 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.143999 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.144172 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.144467 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.144552 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.144897 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.144920 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.144931 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.145199 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.145220 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.145231 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.145369 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.145509 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.145541 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.146269 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.146289 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.146289 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.146306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.146322 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.146329 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.146299 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.146352 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.146332 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.146456 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.146481 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.146629 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.147219 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.147246 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.147256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.147639 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.147660 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.147673 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.147823 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.147853 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.148552 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.148580 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.148592 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:11 crc kubenswrapper[4833]: E0217 13:45:11.188966 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="400ms" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.202022 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.203237 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.203278 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.203291 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.203321 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 13:45:11 crc kubenswrapper[4833]: E0217 13:45:11.203794 4833 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.243:6443: connect: connection refused" node="crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.215109 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.215187 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.215270 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.215320 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.215366 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.215415 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.215518 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.215563 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.215637 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.215723 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.215778 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.215818 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.215911 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.216017 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.216092 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 
crc kubenswrapper[4833]: I0217 13:45:11.317207 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317267 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317292 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317319 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317344 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317365 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317385 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317405 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317429 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317450 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317474 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" 
Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317502 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317529 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317531 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317567 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317546 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317581 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317637 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317610 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317648 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317655 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317692 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317610 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317555 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317776 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317835 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317707 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317717 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317700 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.317649 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.404403 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.405636 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.405669 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.405709 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.405733 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 13:45:11 crc kubenswrapper[4833]: E0217 13:45:11.406243 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.243:6443: connect: connection 
refused" node="crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.523931 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.546931 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.556294 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: W0217 13:45:11.569453 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-01ddd3ba90908b390ef6b1dfcbcb98d78b5ea988f4c7c51cf38eefda5e850b9e WatchSource:0}: Error finding container 01ddd3ba90908b390ef6b1dfcbcb98d78b5ea988f4c7c51cf38eefda5e850b9e: Status 404 returned error can't find the container with id 01ddd3ba90908b390ef6b1dfcbcb98d78b5ea988f4c7c51cf38eefda5e850b9e Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.579317 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: W0217 13:45:11.586103 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-8704e5b26cd92ee60d1a60f617be809b2ca9c8082ec82f78157f2c3f155da69a WatchSource:0}: Error finding container 8704e5b26cd92ee60d1a60f617be809b2ca9c8082ec82f78157f2c3f155da69a: Status 404 returned error can't find the container with id 8704e5b26cd92ee60d1a60f617be809b2ca9c8082ec82f78157f2c3f155da69a Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.587502 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 13:45:11 crc kubenswrapper[4833]: E0217 13:45:11.590280 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="800ms" Feb 17 13:45:11 crc kubenswrapper[4833]: W0217 13:45:11.592422 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-be6c5ed3a58f1909431fbf0549cb6ab7a01767683da11a45f408fae66a2e16c1 WatchSource:0}: Error finding container be6c5ed3a58f1909431fbf0549cb6ab7a01767683da11a45f408fae66a2e16c1: Status 404 returned error can't find the container with id be6c5ed3a58f1909431fbf0549cb6ab7a01767683da11a45f408fae66a2e16c1 Feb 17 13:45:11 crc kubenswrapper[4833]: W0217 13:45:11.607464 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-638bf80ba77dfbaecff855b92eb62e4c8f313ab5f78d17352984c8c7f352a670 WatchSource:0}: Error finding container 638bf80ba77dfbaecff855b92eb62e4c8f313ab5f78d17352984c8c7f352a670: Status 404 returned error can't find the container with id 638bf80ba77dfbaecff855b92eb62e4c8f313ab5f78d17352984c8c7f352a670 Feb 17 13:45:11 crc kubenswrapper[4833]: W0217 13:45:11.613059 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-774c54b993cce7730afde85efb79a13bc05c96c02dac2cca54cf275c8d6a2dc1 WatchSource:0}: Error finding container 774c54b993cce7730afde85efb79a13bc05c96c02dac2cca54cf275c8d6a2dc1: Status 404 returned error can't find the container with id 
774c54b993cce7730afde85efb79a13bc05c96c02dac2cca54cf275c8d6a2dc1 Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.806346 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.807879 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.807940 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.807965 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.808111 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 13:45:11 crc kubenswrapper[4833]: E0217 13:45:11.808728 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.243:6443: connect: connection refused" node="crc" Feb 17 13:45:11 crc kubenswrapper[4833]: W0217 13:45:11.857164 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused Feb 17 13:45:11 crc kubenswrapper[4833]: E0217 13:45:11.857261 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.973788 4833 csi_plugin.go:884] Failed to contact API server when 
waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused Feb 17 13:45:11 crc kubenswrapper[4833]: I0217 13:45:11.977805 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:32:31.328907961 +0000 UTC Feb 17 13:45:12 crc kubenswrapper[4833]: I0217 13:45:12.044666 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"774c54b993cce7730afde85efb79a13bc05c96c02dac2cca54cf275c8d6a2dc1"} Feb 17 13:45:12 crc kubenswrapper[4833]: I0217 13:45:12.045862 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"638bf80ba77dfbaecff855b92eb62e4c8f313ab5f78d17352984c8c7f352a670"} Feb 17 13:45:12 crc kubenswrapper[4833]: I0217 13:45:12.046660 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"be6c5ed3a58f1909431fbf0549cb6ab7a01767683da11a45f408fae66a2e16c1"} Feb 17 13:45:12 crc kubenswrapper[4833]: I0217 13:45:12.047596 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8704e5b26cd92ee60d1a60f617be809b2ca9c8082ec82f78157f2c3f155da69a"} Feb 17 13:45:12 crc kubenswrapper[4833]: I0217 13:45:12.048767 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"01ddd3ba90908b390ef6b1dfcbcb98d78b5ea988f4c7c51cf38eefda5e850b9e"} Feb 17 13:45:12 crc kubenswrapper[4833]: W0217 13:45:12.178204 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused Feb 17 13:45:12 crc kubenswrapper[4833]: E0217 13:45:12.178729 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" Feb 17 13:45:12 crc kubenswrapper[4833]: E0217 13:45:12.294365 4833 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.243:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18950ca0474d3946 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 13:45:10.97070215 +0000 UTC m=+0.605801583,LastTimestamp:2026-02-17 13:45:10.97070215 +0000 UTC m=+0.605801583,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 13:45:12 crc kubenswrapper[4833]: W0217 13:45:12.371399 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": 
dial tcp 38.102.83.243:6443: connect: connection refused Feb 17 13:45:12 crc kubenswrapper[4833]: E0217 13:45:12.371517 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" Feb 17 13:45:12 crc kubenswrapper[4833]: W0217 13:45:12.373200 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused Feb 17 13:45:12 crc kubenswrapper[4833]: E0217 13:45:12.373907 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" Feb 17 13:45:12 crc kubenswrapper[4833]: E0217 13:45:12.391101 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="1.6s" Feb 17 13:45:12 crc kubenswrapper[4833]: I0217 13:45:12.609756 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:12 crc kubenswrapper[4833]: I0217 13:45:12.611161 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:12 crc kubenswrapper[4833]: I0217 13:45:12.611212 4833 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:12 crc kubenswrapper[4833]: I0217 13:45:12.611223 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:12 crc kubenswrapper[4833]: I0217 13:45:12.611250 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 13:45:12 crc kubenswrapper[4833]: E0217 13:45:12.611730 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.243:6443: connect: connection refused" node="crc" Feb 17 13:45:12 crc kubenswrapper[4833]: I0217 13:45:12.946352 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 13:45:12 crc kubenswrapper[4833]: E0217 13:45:12.948672 4833 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" Feb 17 13:45:12 crc kubenswrapper[4833]: I0217 13:45:12.974112 4833 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused Feb 17 13:45:12 crc kubenswrapper[4833]: I0217 13:45:12.978133 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 08:06:49.415416905 +0000 UTC Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.053468 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37"} Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.053518 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920"} Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.053531 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2"} Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.053540 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b"} Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.053542 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.054468 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.054507 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.054519 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.054927 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272" exitCode=0 Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.054961 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272"} Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.055095 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.055955 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.055984 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.055994 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.056553 4833 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb" exitCode=0 Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.056601 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb"} Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.056661 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.057096 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:13 crc kubenswrapper[4833]: 
I0217 13:45:13.057380 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.057418 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.057432 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.057688 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.057715 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.057725 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.058163 4833 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c0175a0e173dbfea894445c85d21d5bcd0d42cb4cfc8817d41ad2c466b61b96d" exitCode=0 Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.058226 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.058246 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c0175a0e173dbfea894445c85d21d5bcd0d42cb4cfc8817d41ad2c466b61b96d"} Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.058942 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.058984 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.058993 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.059551 4833 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d" exitCode=0
Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.059587 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d"}
Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.059625 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.060278 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.060301 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.060311 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.973840 4833 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Feb 17 13:45:13 crc kubenswrapper[4833]: I0217 13:45:13.978416 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 22:06:42.515475405 +0000 UTC
Feb 17 13:45:13 crc kubenswrapper[4833]: E0217 13:45:13.992347 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="3.2s"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.065555 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5529e4fe84d20c564ff55b47e32df67ca6aac40d2629a6b5bf96e03a64b79676"}
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.065598 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9702bc05160617e0c8ac8fd3d9a81244b5be2bf955ef58f9408fc0b42bea6609"}
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.065610 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ae2cc390253214c6979cba64584e59e0342c1f750b16c569acfd885cb6b36c31"}
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.065693 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.066593 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.066642 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.066658 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.068850 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553"}
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.068901 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f"}
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.068924 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2"}
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.068940 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256"}
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.070087 4833 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b" exitCode=0
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.070164 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b"}
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.070182 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.070676 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.070705 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.070715 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.073004 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.072972 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"08922642a7e0cf3749659b43cea0865de8509aa8c17e6137406830f1c897d6e5"}
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.073021 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.073785 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.073810 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.073820 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.074529 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.074552 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.074561 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.212648 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.213783 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.213809 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.213819 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.213838 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 17 13:45:14 crc kubenswrapper[4833]: E0217 13:45:14.214162 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.243:6443: connect: connection refused" node="crc"
Feb 17 13:45:14 crc kubenswrapper[4833]: W0217 13:45:14.281069 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Feb 17 13:45:14 crc kubenswrapper[4833]: E0217 13:45:14.281178 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Feb 17 13:45:14 crc kubenswrapper[4833]: W0217 13:45:14.641393 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Feb 17 13:45:14 crc kubenswrapper[4833]: E0217 13:45:14.641974 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.962302 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 13:45:14 crc kubenswrapper[4833]: I0217 13:45:14.979219 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:23:17.707551797 +0000 UTC
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.069825 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.076697 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.078244 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9" exitCode=255
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.078296 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9"}
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.078381 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.079401 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.079438 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.079449 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.079918 4833 scope.go:117] "RemoveContainer" containerID="babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.080774 4833 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee" exitCode=0
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.080873 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.080924 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.080945 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.080883 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee"}
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.080877 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.082196 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.082214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.082223 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.082721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.082747 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.082786 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.082784 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.082976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.082997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.083791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.083819 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.083830 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.129522 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 13:45:15 crc kubenswrapper[4833]: I0217 13:45:15.979770 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 22:06:44.754198073 +0000 UTC
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.085710 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.087827 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51"}
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.087895 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.089194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.089231 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.089246 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.095951 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89"}
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.095990 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b"}
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.095996 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.096006 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c"}
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.096018 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19"}
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.096029 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385"}
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.096070 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.097167 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.097195 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.097208 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.097212 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.097253 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.097271 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.671615 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.671743 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.672680 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.672716 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.672725 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.677811 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 13:45:16 crc kubenswrapper[4833]: I0217 13:45:16.980786 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:47:06.398180955 +0000 UTC
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.098351 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.098369 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.098395 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.098464 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.099363 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.099387 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.099396 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.099615 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.099674 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.099692 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.099748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.099786 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.099803 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.244328 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.414960 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.416884 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.417248 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.417456 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.417618 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 17 13:45:17 crc kubenswrapper[4833]: I0217 13:45:17.981170 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:40:17.898598945 +0000 UTC
Feb 17 13:45:18 crc kubenswrapper[4833]: I0217 13:45:18.357005 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 13:45:18 crc kubenswrapper[4833]: I0217 13:45:18.357247 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:18 crc kubenswrapper[4833]: I0217 13:45:18.358946 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:18 crc kubenswrapper[4833]: I0217 13:45:18.359003 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:18 crc kubenswrapper[4833]: I0217 13:45:18.359024 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:18 crc kubenswrapper[4833]: I0217 13:45:18.373679 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 13:45:18 crc kubenswrapper[4833]: I0217 13:45:18.373887 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 13:45:18 crc kubenswrapper[4833]: I0217 13:45:18.373935 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:18 crc kubenswrapper[4833]: I0217 13:45:18.375251 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:18 crc kubenswrapper[4833]: I0217 13:45:18.375289 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:18 crc kubenswrapper[4833]: I0217 13:45:18.375300 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:18 crc kubenswrapper[4833]: I0217 13:45:18.811222 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 13:45:18 crc kubenswrapper[4833]: I0217 13:45:18.982061 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:15:44.605052843 +0000 UTC
Feb 17 13:45:19 crc kubenswrapper[4833]: I0217 13:45:19.104635 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:19 crc kubenswrapper[4833]: I0217 13:45:19.105595 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:19 crc kubenswrapper[4833]: I0217 13:45:19.105619 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:19 crc kubenswrapper[4833]: I0217 13:45:19.105629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:19 crc kubenswrapper[4833]: I0217 13:45:19.982628 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 21:45:56.157410694 +0000 UTC
Feb 17 13:45:20 crc kubenswrapper[4833]: I0217 13:45:20.531133 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 17 13:45:20 crc kubenswrapper[4833]: I0217 13:45:20.531289 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:20 crc kubenswrapper[4833]: I0217 13:45:20.532396 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:20 crc kubenswrapper[4833]: I0217 13:45:20.532427 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:20 crc kubenswrapper[4833]: I0217 13:45:20.532437 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:20 crc kubenswrapper[4833]: I0217 13:45:20.699297 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 17 13:45:20 crc kubenswrapper[4833]: I0217 13:45:20.983283 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:03:52.24049469 +0000 UTC
Feb 17 13:45:21 crc kubenswrapper[4833]: I0217 13:45:21.084870 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 13:45:21 crc kubenswrapper[4833]: I0217 13:45:21.085308 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:21 crc kubenswrapper[4833]: I0217 13:45:21.086938 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:21 crc kubenswrapper[4833]: I0217 13:45:21.087012 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:21 crc kubenswrapper[4833]: I0217 13:45:21.087074 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:21 crc kubenswrapper[4833]: E0217 13:45:21.108758 4833 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 17 13:45:21 crc kubenswrapper[4833]: I0217 13:45:21.108854 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:45:21 crc kubenswrapper[4833]: I0217 13:45:21.111255 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:21 crc kubenswrapper[4833]: I0217 13:45:21.111302 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:21 crc kubenswrapper[4833]: I0217 13:45:21.111323 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:21 crc kubenswrapper[4833]: I0217 13:45:21.811780 4833 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 13:45:21 crc kubenswrapper[4833]: I0217 13:45:21.811915 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 17 13:45:21 crc kubenswrapper[4833]: I0217 13:45:21.984286 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 05:36:59.61112668 +0000 UTC
Feb 17 13:45:22 crc kubenswrapper[4833]: I0217 13:45:22.984935 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:28:52.446270914 +0000 UTC
Feb 17 13:45:23 crc kubenswrapper[4833]: I0217 13:45:23.985516 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 08:15:50.021651205 +0000 UTC
Feb 17 13:45:24 crc kubenswrapper[4833]: I0217 13:45:24.766347 4833 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 17 13:45:24 crc kubenswrapper[4833]: I0217 13:45:24.766431 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 17 13:45:24 crc kubenswrapper[4833]: I0217 13:45:24.770157 4833 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 17 13:45:24 crc kubenswrapper[4833]: I0217 13:45:24.770228 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 17 13:45:24 crc kubenswrapper[4833]: I0217 13:45:24.986047 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 10:54:42.409424161 +0000 UTC
Feb 17 13:45:25 crc kubenswrapper[4833]: I0217 13:45:25.137401 4833 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]log ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]etcd ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/openshift.io-api-request-count-filter ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/openshift.io-startkubeinformers ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/priority-and-fairness-config-consumer ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/priority-and-fairness-filter ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/start-apiextensions-informers ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/start-apiextensions-controllers ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/crd-informer-synced ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/start-system-namespaces-controller ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/start-cluster-authentication-info-controller ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/start-legacy-token-tracking-controller ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/start-service-ip-repair-controllers ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Feb 17 13:45:25 crc kubenswrapper[4833]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/priority-and-fairness-config-producer ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/bootstrap-controller ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/start-kube-aggregator-informers ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/apiservice-status-local-available-controller ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/apiservice-status-remote-available-controller ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/apiservice-registration-controller ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/apiservice-wait-for-first-sync ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/apiservice-discovery-controller ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/kube-apiserver-autoregistration ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]autoregister-completion ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/apiservice-openapi-controller ok
Feb 17 13:45:25 crc kubenswrapper[4833]: [+]poststarthook/apiservice-openapiv3-controller ok
Feb 17 13:45:25 crc kubenswrapper[4833]: livez check failed
Feb 17 13:45:25 crc kubenswrapper[4833]: I0217 13:45:25.137473 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 13:45:25 crc kubenswrapper[4833]: I0217 13:45:25.987083 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 06:39:52.050055864 +0000 UTC
Feb 17 13:45:26 crc kubenswrapper[4833]: I0217 13:45:26.987640 4833 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:25:01.428582276 +0000 UTC Feb 17 13:45:27 crc kubenswrapper[4833]: I0217 13:45:27.988639 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 12:14:20.839683474 +0000 UTC Feb 17 13:45:28 crc kubenswrapper[4833]: I0217 13:45:28.362440 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:45:28 crc kubenswrapper[4833]: I0217 13:45:28.362581 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:28 crc kubenswrapper[4833]: I0217 13:45:28.363682 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:28 crc kubenswrapper[4833]: I0217 13:45:28.363738 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:28 crc kubenswrapper[4833]: I0217 13:45:28.363756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:28 crc kubenswrapper[4833]: I0217 13:45:28.788290 4833 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 17 13:45:28 crc kubenswrapper[4833]: I0217 13:45:28.788359 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get 
\"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 17 13:45:28 crc kubenswrapper[4833]: I0217 13:45:28.988981 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 00:34:01.446633015 +0000 UTC Feb 17 13:45:29 crc kubenswrapper[4833]: E0217 13:45:29.757216 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.759408 4833 trace.go:236] Trace[681164029]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 13:45:18.028) (total time: 11730ms): Feb 17 13:45:29 crc kubenswrapper[4833]: Trace[681164029]: ---"Objects listed" error: 11730ms (13:45:29.759) Feb 17 13:45:29 crc kubenswrapper[4833]: Trace[681164029]: [11.730637818s] [11.730637818s] END Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.759445 4833 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.760238 4833 trace.go:236] Trace[1584857083]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 13:45:19.486) (total time: 10273ms): Feb 17 13:45:29 crc kubenswrapper[4833]: Trace[1584857083]: ---"Objects listed" error: 10273ms (13:45:29.760) Feb 17 13:45:29 crc kubenswrapper[4833]: Trace[1584857083]: [10.273900401s] [10.273900401s] END Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.760260 4833 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 13:45:29 crc kubenswrapper[4833]: E0217 13:45:29.761457 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is 
forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.761650 4833 trace.go:236] Trace[632108787]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 13:45:15.113) (total time: 14647ms): Feb 17 13:45:29 crc kubenswrapper[4833]: Trace[632108787]: ---"Objects listed" error: 14647ms (13:45:29.760) Feb 17 13:45:29 crc kubenswrapper[4833]: Trace[632108787]: [14.647465936s] [14.647465936s] END Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.761668 4833 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.762684 4833 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.766115 4833 trace.go:236] Trace[21367255]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 13:45:15.327) (total time: 14438ms): Feb 17 13:45:29 crc kubenswrapper[4833]: Trace[21367255]: ---"Objects listed" error: 14438ms (13:45:29.765) Feb 17 13:45:29 crc kubenswrapper[4833]: Trace[21367255]: [14.438331209s] [14.438331209s] END Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.766148 4833 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.781744 4833 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.793283 4833 csr.go:261] certificate signing request csr-c2zqr is approved, waiting to be issued Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.800484 4833 csr.go:257] certificate signing request csr-c2zqr is issued Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.826224 4833 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.830608 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.971892 4833 apiserver.go:52] "Watching apiserver" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.977341 4833 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.977652 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-vx9xx","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.977979 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.978030 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.978141 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.978188 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:45:29 crc kubenswrapper[4833]: E0217 13:45:29.978245 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:45:29 crc kubenswrapper[4833]: E0217 13:45:29.978352 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.978444 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.978892 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vx9xx" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.978905 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:29 crc kubenswrapper[4833]: E0217 13:45:29.979109 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.980578 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.981085 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.981097 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.981307 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.981475 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.981548 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.981810 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.981933 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.982259 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.982353 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 
13:45:29.982375 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.983571 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.986054 4833 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 17 13:45:29 crc kubenswrapper[4833]: I0217 13:45:29.989138 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 01:08:44.6090246 +0000 UTC Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.002641 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.018193 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.031346 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.045145 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.056574 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.064239 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.064299 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.064328 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.064353 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.064377 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.064406 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.064456 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.064481 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.064508 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.064526 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.064591 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.064603 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.064755 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.064875 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.064927 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.064939 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.064987 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065088 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065145 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065172 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065192 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065215 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065237 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065260 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065283 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065306 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065353 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065378 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065414 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065435 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065456 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065477 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065500 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065523 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065547 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065573 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065594 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065615 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065634 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065656 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065678 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065702 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065723 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065746 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065769 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065801 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065823 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065851 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065875 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065898 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065919 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065943 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065968 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.065989 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066011 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066048 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066071 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066112 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066137 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066158 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066181 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066202 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066224 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066245 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066267 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066288 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066314 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066351 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066375 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066398 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066423 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066445 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066468 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066493 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066516 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066538 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066561 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066582 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066602 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066623 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066645 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066667 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066688 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066710 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066734 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066755 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066784 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066805 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066828 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066853 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066875 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066897 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066919 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066941 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066964 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.066989 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067010 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067047 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067072 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067096 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067118 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067139 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067162 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067185 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067208 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 
13:45:30.067233 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067257 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067278 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067300 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067323 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067344 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067366 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067388 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067491 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067515 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067540 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067565 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067593 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067615 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067638 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067660 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067684 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067710 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067732 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067755 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067777 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067800 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 
13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067823 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067845 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067867 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067891 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067913 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067937 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067960 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.067982 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068014 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068229 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068279 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " 
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068305 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068329 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068353 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068379 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068407 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068433 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068457 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068480 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068503 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068526 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068548 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 13:45:30 crc 
kubenswrapper[4833]: I0217 13:45:30.068574 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068599 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068622 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068647 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068682 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068710 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068734 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068758 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068803 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068829 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068853 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 13:45:30 crc 
kubenswrapper[4833]: I0217 13:45:30.068876 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068900 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068924 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068947 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068972 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.068997 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069021 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069061 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069086 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069114 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069138 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 
13:45:30.069163 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069185 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069210 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069235 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069259 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069285 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069309 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069333 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069357 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069381 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069408 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 13:45:30 crc kubenswrapper[4833]: 
I0217 13:45:30.069432 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069456 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069482 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069508 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069532 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069558 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069583 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069608 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069633 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069656 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069681 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069705 4833 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069730 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069754 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069779 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069804 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069829 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069854 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069878 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069905 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069930 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.069955 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070003 4833 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070029 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070071 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070105 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w7kx\" (UniqueName: \"kubernetes.io/projected/d13dbe6c-a57b-4011-9987-193ccf4939f6-kube-api-access-2w7kx\") pod \"node-resolver-vx9xx\" (UID: \"d13dbe6c-a57b-4011-9987-193ccf4939f6\") " pod="openshift-dns/node-resolver-vx9xx" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070134 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 
13:45:30.070158 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070187 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070213 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070235 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070265 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") 
" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070288 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d13dbe6c-a57b-4011-9987-193ccf4939f6-hosts-file\") pod \"node-resolver-vx9xx\" (UID: \"d13dbe6c-a57b-4011-9987-193ccf4939f6\") " pod="openshift-dns/node-resolver-vx9xx" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070315 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070340 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070368 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070395 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070419 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070471 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070487 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070505 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070520 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070536 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070550 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070564 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070577 4833 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.070592 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.071120 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.071936 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.072346 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.073019 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.075841 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.075959 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:45:30.575931211 +0000 UTC m=+20.211030644 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.075999 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.076077 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.076092 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.076183 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.076229 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.076290 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.076332 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.076414 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.076566 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.076641 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.076854 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.077228 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.077363 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.077363 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.077486 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.077590 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.077802 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.078115 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.078200 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.078243 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.078325 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.078462 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.078517 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.078593 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.078624 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.078676 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.078669 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.078778 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.078825 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.078964 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.078993 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.079061 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.079148 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.079184 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.079241 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.079283 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.079574 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.079608 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.079663 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.080089 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.080288 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.080462 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.080472 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.080918 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.081083 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.081202 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.081461 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.081880 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.082326 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.082400 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.082625 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.082632 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.082831 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.082840 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.083141 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.083143 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.083161 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.083186 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.083202 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.083289 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.083480 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.083525 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.083541 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.083572 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.083584 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.083768 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.083897 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.083912 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.084297 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.084366 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.084476 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.084559 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.084622 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.084706 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.084739 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.084891 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.084915 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.085263 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.085887 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.089349 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.089563 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.089602 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.089945 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.090012 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.090021 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.090119 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.090256 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.090293 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.090371 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.090443 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.090496 4833 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.090509 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.090595 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.090617 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.091271 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.091295 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.091300 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.091320 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.091449 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.091751 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.092092 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.092244 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.092265 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.092344 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.092394 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.092401 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.092572 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.092668 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.092710 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.092926 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.092932 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.092943 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.093097 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.093114 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.093123 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.093565 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.093561 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.093979 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.094106 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.094162 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.094167 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.094270 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.094349 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.094405 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.094588 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.094717 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.094723 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.094789 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.094801 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.094975 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.094985 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.095240 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.095657 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.095789 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.095824 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.095977 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.096144 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.096305 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.096426 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.096586 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.096910 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.097473 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.097868 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.098215 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.098774 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.099243 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.099414 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.099511 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.099577 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:45:30.599556601 +0000 UTC m=+20.234656124 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.100106 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.100214 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.100265 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:45:30.600247137 +0000 UTC m=+20.235346570 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.100648 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.100942 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.101416 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.101544 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.101800 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.102002 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.102113 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.102374 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.102442 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.102646 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.102751 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.102895 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.102930 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.103018 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.103750 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.109643 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.109719 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.109874 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.110008 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.113526 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.113551 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.113563 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.113612 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 13:45:30.613595559 +0000 UTC m=+20.248694992 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.115192 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.116080 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.117580 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.117602 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.117611 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.117643 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 13:45:30.617633267 +0000 UTC m=+20.252732690 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.117817 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.118512 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.120494 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.120513 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod 
"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.122346 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.125532 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.135387 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.136140 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.138485 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.138841 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.138962 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.140564 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.143836 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.146548 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.146721 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.150276 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.152273 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.154454 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.154656 4833 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.155220 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.155675 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.157405 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.157665 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.170726 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171072 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171535 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w7kx\" (UniqueName: \"kubernetes.io/projected/d13dbe6c-a57b-4011-9987-193ccf4939f6-kube-api-access-2w7kx\") pod \"node-resolver-vx9xx\" (UID: \"d13dbe6c-a57b-4011-9987-193ccf4939f6\") " pod="openshift-dns/node-resolver-vx9xx" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171572 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171605 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171621 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d13dbe6c-a57b-4011-9987-193ccf4939f6-hosts-file\") pod \"node-resolver-vx9xx\" (UID: \"d13dbe6c-a57b-4011-9987-193ccf4939f6\") " 
pod="openshift-dns/node-resolver-vx9xx" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171710 4833 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171727 4833 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171737 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171748 4833 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171757 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171769 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171779 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc 
kubenswrapper[4833]: I0217 13:45:30.171789 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171800 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171810 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171818 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171827 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171838 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171847 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171857 4833 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171870 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171881 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171890 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171900 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171908 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171919 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171927 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171936 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171946 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171954 4833 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171963 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171971 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171981 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171990 4833 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node 
\"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.171998 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172006 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172017 4833 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172026 4833 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172034 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172063 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172076 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 
13:45:30.172086 4833 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172095 4833 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172106 4833 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172114 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172123 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172132 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172143 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172152 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172163 4833 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172175 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172190 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172202 4833 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172212 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172224 4833 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172232 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node 
\"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172241 4833 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172251 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172262 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172271 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172280 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172288 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172298 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172307 4833 
reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172316 4833 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172326 4833 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172339 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172347 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172357 4833 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172371 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172381 4833 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172399 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172409 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172419 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172428 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172436 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172447 4833 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172459 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172467 4833 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172476 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172488 4833 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172496 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172506 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172518 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172530 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" 
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172538 4833 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172547 4833 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172556 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172567 4833 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172576 4833 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172584 4833 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172593 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172595 4833 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172603 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172654 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d13dbe6c-a57b-4011-9987-193ccf4939f6-hosts-file\") pod \"node-resolver-vx9xx\" (UID: \"d13dbe6c-a57b-4011-9987-193ccf4939f6\") " pod="openshift-dns/node-resolver-vx9xx" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172676 4833 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172695 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172749 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172766 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172779 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172795 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172809 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172821 4833 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172832 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172847 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172859 4833 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172872 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath 
\"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172886 4833 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172903 4833 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172915 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172926 4833 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172940 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172952 4833 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172964 4833 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172960 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172976 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.172999 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.173014 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.173034 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179146 4833 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179184 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179203 4833 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179213 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179223 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179232 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179246 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179255 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179265 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179273 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179286 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179294 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179304 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179313 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179326 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179336 4833 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179346 4833 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179357 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179368 4833 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179378 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179388 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179400 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179410 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179420 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179430 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179446 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179456 4833 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179466 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179476 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179489 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179498 4833 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 
crc kubenswrapper[4833]: I0217 13:45:30.179507 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179519 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179527 4833 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179536 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179545 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179561 4833 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179570 4833 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179579 4833 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179587 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179598 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179607 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179616 4833 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179627 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179636 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179645 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179654 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179665 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179674 4833 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179682 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179691 4833 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179703 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179712 4833 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179721 4833 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179730 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179740 4833 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179749 4833 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179758 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179770 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179779 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179788 4833 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179797 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179808 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179816 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179824 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179832 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179843 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179851 4833 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179859 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179870 4833 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179879 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.179887 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.173329 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.187945 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.190651 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.193880 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.197577 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w7kx\" (UniqueName: \"kubernetes.io/projected/d13dbe6c-a57b-4011-9987-193ccf4939f6-kube-api-access-2w7kx\") pod \"node-resolver-vx9xx\" (UID: \"d13dbe6c-a57b-4011-9987-193ccf4939f6\") " pod="openshift-dns/node-resolver-vx9xx" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.208405 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.220591 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.230803 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.240732 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.250964 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.272058 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.280014 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, 
/tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.280567 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.280603 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.292579 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.297106 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-
crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.304499 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.313123 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.317876 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.326742 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vx9xx" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.328306 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.342133 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.362354 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.380677 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: W0217 13:45:30.403333 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd13dbe6c_a57b_4011_9987_193ccf4939f6.slice/crio-67c063822be09de67792c1728ae9078ac1b9bc074a86d7763c2b914fd8cb5626 WatchSource:0}: Error finding container 67c063822be09de67792c1728ae9078ac1b9bc074a86d7763c2b914fd8cb5626: Status 404 returned error can't find the container with id 67c063822be09de67792c1728ae9078ac1b9bc074a86d7763c2b914fd8cb5626 Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.416376 4833 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.443221 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.453472 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.462750 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.475196 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.481549 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.498251 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.508646 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.556959 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.568997 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.575089 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.575639 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.583352 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.583512 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:45:31.583489762 +0000 UTC m=+21.218589195 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.583347 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.593544 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.605589 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.616373 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.654320 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.676439 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.684503 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.684549 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.684577 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.684606 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.684680 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.684719 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.684726 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.684751 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.684759 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.684764 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.684767 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:45:31.684745604 +0000 UTC m=+21.319845037 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.684774 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.684688 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.684818 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 13:45:31.684802806 +0000 UTC m=+21.319902239 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.684839 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 13:45:31.684830396 +0000 UTC m=+21.319929829 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:30 crc kubenswrapper[4833]: E0217 13:45:30.684850 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:45:31.684843957 +0000 UTC m=+21.319943390 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.691106 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.705878 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.714913 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.729756 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9b
e8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.737818 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.748241 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.755575 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.763454 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.773500 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.783921 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.792609 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.801808 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-17 13:40:29 +0000 UTC, rotation deadline is 2026-11-04 22:12:09.183806049 +0000 UTC Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.801883 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6248h26m38.38192631s for next certificate rotation Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.806293 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.834278 4833 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 17 13:45:30 crc kubenswrapper[4833]: W0217 13:45:30.834491 4833 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Feb 17 13:45:30 crc kubenswrapper[4833]: W0217 13:45:30.834645 4833 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Feb 17 13:45:30 crc kubenswrapper[4833]: W0217 13:45:30.834666 4833 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Feb 17 
13:45:30 crc kubenswrapper[4833]: W0217 13:45:30.834684 4833 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received
Feb 17 13:45:30 crc kubenswrapper[4833]: W0217 13:45:30.834689 4833 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received
Feb 17 13:45:30 crc kubenswrapper[4833]: W0217 13:45:30.834647 4833 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received
Feb 17 13:45:30 crc kubenswrapper[4833]: W0217 13:45:30.834709 4833 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Feb 17 13:45:30 crc kubenswrapper[4833]: W0217 13:45:30.834700 4833 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Feb 17 13:45:30 crc kubenswrapper[4833]: W0217 13:45:30.834729 4833 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Feb 17 13:45:30 crc kubenswrapper[4833]: W0217 13:45:30.834726 4833 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received
Feb 17 13:45:30 crc kubenswrapper[4833]: W0217 13:45:30.834749 4833 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received
Feb 17 13:45:30 crc kubenswrapper[4833]: W0217 13:45:30.835075 4833 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received
Feb 17 13:45:30 crc kubenswrapper[4833]: I0217 13:45:30.990110 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 16:04:31.942963857 +0000 UTC
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.040697 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 13:45:31 crc kubenswrapper[4833]: E0217 13:45:31.040816 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.045254 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.046071 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.047102 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.048126 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.048913 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.049003 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.049613 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.050316 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.051002 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.051770 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.052431 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.053098 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.053883 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.054525 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.057223 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.058102 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.058378 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":
\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.059562 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.060629 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.061168 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.062347 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.062973 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.063501 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.064553 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.065023 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.066242 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.066845 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.068390 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.069235 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.069743 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.070781 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.071444 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.072943 4833 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.073167 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.074877 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.075467 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.076573 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.077311 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.079100 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.080066 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.080870 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.081806 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.082838 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.086865 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.087726 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.089156 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.089256 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.090592 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.091328 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.092717 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.093598 4833 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.095173 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.095905 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.097392 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.098179 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.099195 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.100577 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.101356 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.106415 4833 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.127421 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.139083 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vx9xx" 
event={"ID":"d13dbe6c-a57b-4011-9987-193ccf4939f6","Type":"ContainerStarted","Data":"43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75"} Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.139148 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vx9xx" event={"ID":"d13dbe6c-a57b-4011-9987-193ccf4939f6","Type":"ContainerStarted","Data":"67c063822be09de67792c1728ae9078ac1b9bc074a86d7763c2b914fd8cb5626"} Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.140881 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"74fe5143b29116b6890e77cee1e9c739d2a37a3ee2156b96039fc1a3f8abba8e"} Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.142375 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8"} Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.142543 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa"} Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.142660 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"55eb18aa5fc77cc695a1636723b6e8c11af35b4de2a5e9dcfd2c009fc35db43b"} Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.145510 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8"} Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.146068 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d9be57e838a7b1b8151a06bc06b9406640d5357ad023a1bc7795b1892d3a09d3"} Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.148109 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.164539 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.176142 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.186550 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.196724 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.216275 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.230353 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.242464 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.252140 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.260569 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.270362 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: 
connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Complete
d\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.287222 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e491
17b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.300127 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.316779 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.435760 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-wxvlq"] Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.436361 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-nmzvl"] Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.436517 4833 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-wlt4c"] Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.436584 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.436691 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.436840 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.438735 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.440228 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.440796 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.441153 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.441215 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.441233 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.441422 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 13:45:31 crc 
kubenswrapper[4833]: I0217 13:45:31.441543 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.441561 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.441960 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.442562 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.450272 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.468799 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.502878 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.569666 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.592334 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.592436 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4eb74ba2-a87a-415f-8978-8ef706346aa3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.592456 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f4a1ca83-1919-4f9c-82de-c849cbd50e70-rootfs\") pod \"machine-config-daemon-nmzvl\" (UID: \"f4a1ca83-1919-4f9c-82de-c849cbd50e70\") " pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" Feb 17 13:45:31 crc kubenswrapper[4833]: E0217 13:45:31.592524 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:45:33.592503697 +0000 UTC m=+23.227603140 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.592580 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f4a1ca83-1919-4f9c-82de-c849cbd50e70-mcd-auth-proxy-config\") pod \"machine-config-daemon-nmzvl\" (UID: \"f4a1ca83-1919-4f9c-82de-c849cbd50e70\") " pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.592650 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-system-cni-dir\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.592678 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-host-run-netns\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.592700 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-host-var-lib-cni-multus\") pod \"multus-wlt4c\" (UID: 
\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.592723 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-cnibin\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.592744 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-etc-kubernetes\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.592776 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-host-var-lib-cni-bin\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.592806 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-hostroot\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.592828 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4eb74ba2-a87a-415f-8978-8ef706346aa3-cni-binary-copy\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " 
pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.592850 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-multus-socket-dir-parent\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.592871 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-host-var-lib-kubelet\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.592903 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f4a1ca83-1919-4f9c-82de-c849cbd50e70-proxy-tls\") pod \"machine-config-daemon-nmzvl\" (UID: \"f4a1ca83-1919-4f9c-82de-c849cbd50e70\") " pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.592923 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-multus-conf-dir\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.592945 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-host-run-multus-certs\") pod \"multus-wlt4c\" (UID: 
\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.592966 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-host-run-k8s-cni-cncf-io\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.592987 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-os-release\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.593006 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-cni-binary-copy\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.593032 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq5ql\" (UniqueName: \"kubernetes.io/projected/f4a1ca83-1919-4f9c-82de-c849cbd50e70-kube-api-access-nq5ql\") pod \"machine-config-daemon-nmzvl\" (UID: \"f4a1ca83-1919-4f9c-82de-c849cbd50e70\") " pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.593084 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-multus-daemon-config\") pod \"multus-wlt4c\" 
(UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.593116 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4eb74ba2-a87a-415f-8978-8ef706346aa3-system-cni-dir\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.593138 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4eb74ba2-a87a-415f-8978-8ef706346aa3-cnibin\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.593158 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4eb74ba2-a87a-415f-8978-8ef706346aa3-os-release\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.593181 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24pg6\" (UniqueName: \"kubernetes.io/projected/4eb74ba2-a87a-415f-8978-8ef706346aa3-kube-api-access-24pg6\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.593204 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-multus-cni-dir\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.593235 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4eb74ba2-a87a-415f-8978-8ef706346aa3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.593260 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9dxr\" (UniqueName: \"kubernetes.io/projected/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-kube-api-access-t9dxr\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.595105 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.647895 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.660569 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.693528 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-host-run-multus-certs\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.693593 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-host-run-k8s-cni-cncf-io\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.693625 4833 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-os-release\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.693648 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-cni-binary-copy\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.693669 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq5ql\" (UniqueName: \"kubernetes.io/projected/f4a1ca83-1919-4f9c-82de-c849cbd50e70-kube-api-access-nq5ql\") pod \"machine-config-daemon-nmzvl\" (UID: \"f4a1ca83-1919-4f9c-82de-c849cbd50e70\") " pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.693681 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-host-run-multus-certs\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.693742 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-host-run-k8s-cni-cncf-io\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694108 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-os-release\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694375 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694403 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-multus-daemon-config\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.693689 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-multus-daemon-config\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694572 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694589 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-cni-binary-copy\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694590 
4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4eb74ba2-a87a-415f-8978-8ef706346aa3-system-cni-dir\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694609 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4eb74ba2-a87a-415f-8978-8ef706346aa3-system-cni-dir\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694627 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4eb74ba2-a87a-415f-8978-8ef706346aa3-cnibin\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694644 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4eb74ba2-a87a-415f-8978-8ef706346aa3-os-release\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694660 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24pg6\" (UniqueName: \"kubernetes.io/projected/4eb74ba2-a87a-415f-8978-8ef706346aa3-kube-api-access-24pg6\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc 
kubenswrapper[4833]: I0217 13:45:31.694691 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-multus-cni-dir\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694701 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4eb74ba2-a87a-415f-8978-8ef706346aa3-os-release\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694710 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694725 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4eb74ba2-a87a-415f-8978-8ef706346aa3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694741 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9dxr\" (UniqueName: \"kubernetes.io/projected/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-kube-api-access-t9dxr\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc 
kubenswrapper[4833]: I0217 13:45:31.694756 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4eb74ba2-a87a-415f-8978-8ef706346aa3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694775 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f4a1ca83-1919-4f9c-82de-c849cbd50e70-rootfs\") pod \"machine-config-daemon-nmzvl\" (UID: \"f4a1ca83-1919-4f9c-82de-c849cbd50e70\") " pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694789 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f4a1ca83-1919-4f9c-82de-c849cbd50e70-mcd-auth-proxy-config\") pod \"machine-config-daemon-nmzvl\" (UID: \"f4a1ca83-1919-4f9c-82de-c849cbd50e70\") " pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694803 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-system-cni-dir\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694816 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-multus-cni-dir\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694838 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-host-run-netns\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: E0217 13:45:31.694661 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:45:31 crc kubenswrapper[4833]: E0217 13:45:31.694940 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:45:33.694929028 +0000 UTC m=+23.330028461 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694725 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4eb74ba2-a87a-415f-8978-8ef706346aa3-cnibin\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695120 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f4a1ca83-1919-4f9c-82de-c849cbd50e70-rootfs\") pod \"machine-config-daemon-nmzvl\" (UID: \"f4a1ca83-1919-4f9c-82de-c849cbd50e70\") " 
pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695084 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-system-cni-dir\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.694817 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-host-run-netns\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695197 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-host-var-lib-cni-multus\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: E0217 13:45:31.694851 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695250 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-cnibin\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: E0217 13:45:31.695266 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 
nodeName:}" failed. No retries permitted until 2026-02-17 13:45:33.695249935 +0000 UTC m=+23.330349448 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695223 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-cnibin\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695280 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-host-var-lib-cni-multus\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695304 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-etc-kubernetes\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695335 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 
13:45:31.695341 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4eb74ba2-a87a-415f-8978-8ef706346aa3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695359 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-host-var-lib-cni-bin\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695375 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-etc-kubernetes\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695383 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-hostroot\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695401 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-host-var-lib-cni-bin\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695407 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4eb74ba2-a87a-415f-8978-8ef706346aa3-cni-binary-copy\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695425 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-hostroot\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695427 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-multus-socket-dir-parent\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695448 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-host-var-lib-kubelet\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: E0217 13:45:31.695467 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695477 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695492 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4eb74ba2-a87a-415f-8978-8ef706346aa3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695503 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f4a1ca83-1919-4f9c-82de-c849cbd50e70-proxy-tls\") pod \"machine-config-daemon-nmzvl\" (UID: \"f4a1ca83-1919-4f9c-82de-c849cbd50e70\") " pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695515 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-multus-socket-dir-parent\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695535 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-multus-conf-dir\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695548 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f4a1ca83-1919-4f9c-82de-c849cbd50e70-mcd-auth-proxy-config\") pod \"machine-config-daemon-nmzvl\" (UID: \"f4a1ca83-1919-4f9c-82de-c849cbd50e70\") " 
pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" Feb 17 13:45:31 crc kubenswrapper[4833]: E0217 13:45:31.695484 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695586 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-host-var-lib-kubelet\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: E0217 13:45:31.695595 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:31 crc kubenswrapper[4833]: E0217 13:45:31.695569 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:45:31 crc kubenswrapper[4833]: E0217 13:45:31.695612 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:45:31 crc kubenswrapper[4833]: E0217 13:45:31.695622 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:31 crc kubenswrapper[4833]: E0217 13:45:31.695634 4833 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 13:45:33.695622754 +0000 UTC m=+23.330722237 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695587 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-multus-conf-dir\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: E0217 13:45:31.695660 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 13:45:33.695644525 +0000 UTC m=+23.330743958 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.695834 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4eb74ba2-a87a-415f-8978-8ef706346aa3-cni-binary-copy\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.699379 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f4a1ca83-1919-4f9c-82de-c849cbd50e70-proxy-tls\") pod \"machine-config-daemon-nmzvl\" (UID: \"f4a1ca83-1919-4f9c-82de-c849cbd50e70\") " pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.719823 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.751742 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq5ql\" (UniqueName: \"kubernetes.io/projected/f4a1ca83-1919-4f9c-82de-c849cbd50e70-kube-api-access-nq5ql\") pod \"machine-config-daemon-nmzvl\" (UID: \"f4a1ca83-1919-4f9c-82de-c849cbd50e70\") " pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.766739 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24pg6\" (UniqueName: \"kubernetes.io/projected/4eb74ba2-a87a-415f-8978-8ef706346aa3-kube-api-access-24pg6\") pod \"multus-additional-cni-plugins-wxvlq\" (UID: \"4eb74ba2-a87a-415f-8978-8ef706346aa3\") " pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.768441 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.790618 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9dxr\" (UniqueName: \"kubernetes.io/projected/a3b8d3ca-f768-4129-9c1a-b4866dd852d4-kube-api-access-t9dxr\") pod \"multus-wlt4c\" (UID: \"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\") " pod="openshift-multus/multus-wlt4c" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.806759 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7r9gt"] Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.807675 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.823915 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.834150 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.853728 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.873981 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.895067 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.914425 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.933882 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.955097 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 13:45:31 crc 
kubenswrapper[4833]: I0217 13:45:31.990572 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 13:47:13.695863571 +0000 UTC Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.995231 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.998697 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-run-netns\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.998736 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-run-ovn\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.998756 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-log-socket\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.998779 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-slash\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 
17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.998800 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-etc-openvswitch\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.998858 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.998909 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxg4p\" (UniqueName: \"kubernetes.io/projected/72c5918a-056f-446c-b138-a1be7140a5b0-kube-api-access-wxg4p\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.998951 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-var-lib-openvswitch\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.998970 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/72c5918a-056f-446c-b138-a1be7140a5b0-env-overrides\") pod \"ovnkube-node-7r9gt\" (UID: 
\"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.998998 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-systemd-units\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.999027 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/72c5918a-056f-446c-b138-a1be7140a5b0-ovn-node-metrics-cert\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.999060 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-node-log\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.999077 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/72c5918a-056f-446c-b138-a1be7140a5b0-ovnkube-config\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.999095 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-run-openvswitch\") pod 
\"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.999109 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-run-systemd\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.999124 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-cni-netd\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.999146 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-kubelet\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.999161 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-cni-bin\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.999192 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-run-ovn-kubernetes\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:31 crc kubenswrapper[4833]: I0217 13:45:31.999209 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/72c5918a-056f-446c-b138-a1be7140a5b0-ovnkube-script-lib\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.014089 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.034775 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.041466 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.041508 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:32 crc kubenswrapper[4833]: E0217 13:45:32.041605 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:45:32 crc kubenswrapper[4833]: E0217 13:45:32.041703 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.049756 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.058741 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-wlt4c" Feb 17 13:45:32 crc kubenswrapper[4833]: W0217 13:45:32.059114 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4a1ca83_1919_4f9c_82de_c849cbd50e70.slice/crio-457f46830b8dc15d1858b061c48a961e728bd7a97c39a0c01867ade9d2a0e9d6 WatchSource:0}: Error finding container 457f46830b8dc15d1858b061c48a961e728bd7a97c39a0c01867ade9d2a0e9d6: Status 404 returned error can't find the container with id 457f46830b8dc15d1858b061c48a961e728bd7a97c39a0c01867ade9d2a0e9d6 Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.076057 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100023 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/72c5918a-056f-446c-b138-a1be7140a5b0-ovn-node-metrics-cert\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100083 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-run-openvswitch\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100097 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-node-log\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100111 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/72c5918a-056f-446c-b138-a1be7140a5b0-ovnkube-config\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100126 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-run-systemd\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100140 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-cni-netd\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100161 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-kubelet\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100175 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-cni-bin\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100191 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-run-ovn-kubernetes\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100205 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/72c5918a-056f-446c-b138-a1be7140a5b0-ovnkube-script-lib\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100231 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-run-ovn\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100244 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-run-netns\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100258 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-log-socket\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100273 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-slash\") pod 
\"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100289 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-etc-openvswitch\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100302 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100317 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxg4p\" (UniqueName: \"kubernetes.io/projected/72c5918a-056f-446c-b138-a1be7140a5b0-kube-api-access-wxg4p\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100334 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-systemd-units\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100348 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-var-lib-openvswitch\") pod 
\"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100361 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/72c5918a-056f-446c-b138-a1be7140a5b0-env-overrides\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100794 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/72c5918a-056f-446c-b138-a1be7140a5b0-env-overrides\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100846 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-run-openvswitch\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.100877 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-node-log\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.101358 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/72c5918a-056f-446c-b138-a1be7140a5b0-ovnkube-config\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" 
Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.101402 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-run-systemd\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.101438 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-cni-netd\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.101479 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-kubelet\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.101506 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-cni-bin\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.101544 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-run-ovn-kubernetes\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.101970 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/72c5918a-056f-446c-b138-a1be7140a5b0-ovnkube-script-lib\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.102001 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-run-ovn\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.102021 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-run-netns\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.102083 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-log-socket\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.102106 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-slash\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.102124 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-etc-openvswitch\") pod 
\"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.102142 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.102252 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-systemd-units\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.102276 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-var-lib-openvswitch\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.104691 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.107788 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/72c5918a-056f-446c-b138-a1be7140a5b0-ovn-node-metrics-cert\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.115701 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.147770 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxg4p\" (UniqueName: \"kubernetes.io/projected/72c5918a-056f-446c-b138-a1be7140a5b0-kube-api-access-wxg4p\") pod \"ovnkube-node-7r9gt\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.149086 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" event={"ID":"4eb74ba2-a87a-415f-8978-8ef706346aa3","Type":"ContainerStarted","Data":"a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab"} Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.149123 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" event={"ID":"4eb74ba2-a87a-415f-8978-8ef706346aa3","Type":"ContainerStarted","Data":"e1a29b71ea21ca312cf09313b11744cb4547b77fd0e7ebf1554c7a6e9816ff83"} Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.150926 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wlt4c" event={"ID":"a3b8d3ca-f768-4129-9c1a-b4866dd852d4","Type":"ContainerStarted","Data":"751184f2ae2978e13538e06fe040bb83360ac9a88e2d0578a196af659ca96d62"} Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.151756 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" event={"ID":"f4a1ca83-1919-4f9c-82de-c849cbd50e70","Type":"ContainerStarted","Data":"457f46830b8dc15d1858b061c48a961e728bd7a97c39a0c01867ade9d2a0e9d6"} Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.174305 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.193913 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.222820 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.234439 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.254080 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.301693 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.314558 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.334445 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.374376 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.403503 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.420339 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:32 crc kubenswrapper[4833]: W0217 13:45:32.437685 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72c5918a_056f_446c_b138_a1be7140a5b0.slice/crio-a4b7843bd85b06f5b3e21637cd2e22f0854ff282374189fb9dc7a310009fcdf6 WatchSource:0}: Error finding container a4b7843bd85b06f5b3e21637cd2e22f0854ff282374189fb9dc7a310009fcdf6: Status 404 returned error can't find the container with id a4b7843bd85b06f5b3e21637cd2e22f0854ff282374189fb9dc7a310009fcdf6 Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.443751 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.482592 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.523453 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.562991 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.602981 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.642616 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.682730 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.735141 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.775520 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.807483 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.843282 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.888558 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.895647 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-5gxjd"] Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.895963 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-5gxjd" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.914726 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.934208 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.955470 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.974241 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 13:45:32 crc kubenswrapper[4833]: I0217 13:45:32.991473 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:43:49.780226533 +0000 UTC Feb 17 13:45:33 crc kubenswrapper[4833]: 
I0217 13:45:33.000946 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.008411 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s6vd\" (UniqueName: \"kubernetes.io/projected/03c9f579-21ad-4977-a5e8-db9272a08557-kube-api-access-6s6vd\") pod \"node-ca-5gxjd\" (UID: \"03c9f579-21ad-4977-a5e8-db9272a08557\") " pod="openshift-image-registry/node-ca-5gxjd" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.008464 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/03c9f579-21ad-4977-a5e8-db9272a08557-serviceca\") pod \"node-ca-5gxjd\" (UID: \"03c9f579-21ad-4977-a5e8-db9272a08557\") " 
pod="openshift-image-registry/node-ca-5gxjd" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.008492 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/03c9f579-21ad-4977-a5e8-db9272a08557-host\") pod \"node-ca-5gxjd\" (UID: \"03c9f579-21ad-4977-a5e8-db9272a08557\") " pod="openshift-image-registry/node-ca-5gxjd" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.041498 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:45:33 crc kubenswrapper[4833]: E0217 13:45:33.041624 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.042480 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.080772 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.109640 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s6vd\" (UniqueName: \"kubernetes.io/projected/03c9f579-21ad-4977-a5e8-db9272a08557-kube-api-access-6s6vd\") pod \"node-ca-5gxjd\" (UID: \"03c9f579-21ad-4977-a5e8-db9272a08557\") " pod="openshift-image-registry/node-ca-5gxjd" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.109687 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/03c9f579-21ad-4977-a5e8-db9272a08557-host\") pod \"node-ca-5gxjd\" (UID: \"03c9f579-21ad-4977-a5e8-db9272a08557\") " pod="openshift-image-registry/node-ca-5gxjd" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.109702 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/03c9f579-21ad-4977-a5e8-db9272a08557-serviceca\") pod \"node-ca-5gxjd\" (UID: \"03c9f579-21ad-4977-a5e8-db9272a08557\") " 
pod="openshift-image-registry/node-ca-5gxjd" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.109794 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/03c9f579-21ad-4977-a5e8-db9272a08557-host\") pod \"node-ca-5gxjd\" (UID: \"03c9f579-21ad-4977-a5e8-db9272a08557\") " pod="openshift-image-registry/node-ca-5gxjd" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.110749 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/03c9f579-21ad-4977-a5e8-db9272a08557-serviceca\") pod \"node-ca-5gxjd\" (UID: \"03c9f579-21ad-4977-a5e8-db9272a08557\") " pod="openshift-image-registry/node-ca-5gxjd" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.127641 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036c
c2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba21
34b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.147523 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s6vd\" (UniqueName: \"kubernetes.io/projected/03c9f579-21ad-4977-a5e8-db9272a08557-kube-api-access-6s6vd\") pod \"node-ca-5gxjd\" (UID: \"03c9f579-21ad-4977-a5e8-db9272a08557\") " pod="openshift-image-registry/node-ca-5gxjd" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.156352 
4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wlt4c" event={"ID":"a3b8d3ca-f768-4129-9c1a-b4866dd852d4","Type":"ContainerStarted","Data":"26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b"} Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.157749 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8"} Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.159160 4833 generic.go:334] "Generic (PLEG): container finished" podID="4eb74ba2-a87a-415f-8978-8ef706346aa3" containerID="a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab" exitCode=0 Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.159229 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" event={"ID":"4eb74ba2-a87a-415f-8978-8ef706346aa3","Type":"ContainerDied","Data":"a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab"} Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.160289 4833 generic.go:334] "Generic (PLEG): container finished" podID="72c5918a-056f-446c-b138-a1be7140a5b0" containerID="924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83" exitCode=0 Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.160327 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerDied","Data":"924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83"} Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.160355 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" 
event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerStarted","Data":"a4b7843bd85b06f5b3e21637cd2e22f0854ff282374189fb9dc7a310009fcdf6"} Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.162620 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" event={"ID":"f4a1ca83-1919-4f9c-82de-c849cbd50e70","Type":"ContainerStarted","Data":"b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e"} Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.162664 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" event={"ID":"f4a1ca83-1919-4f9c-82de-c849cbd50e70","Type":"ContainerStarted","Data":"89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78"} Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.184319 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.210329 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-5gxjd" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.224532 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: W0217 13:45:33.239396 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03c9f579_21ad_4977_a5e8_db9272a08557.slice/crio-2fb78e6870f6a8113b993c701e3b411ebe8001eed0b0814d32356167a775c336 WatchSource:0}: Error finding container 2fb78e6870f6a8113b993c701e3b411ebe8001eed0b0814d32356167a775c336: Status 404 returned error can't find the container with id 2fb78e6870f6a8113b993c701e3b411ebe8001eed0b0814d32356167a775c336 Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.266486 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.303483 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.340756 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.382962 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.426715 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.462605 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.509689 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.541922 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.582644 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.616630 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:45:33 crc kubenswrapper[4833]: E0217 13:45:33.616876 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:45:37.616860919 +0000 UTC m=+27.251960352 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.624072 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes
\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.664657 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39
aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.702937 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.717172 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.717216 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.717238 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.717257 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:33 crc kubenswrapper[4833]: E0217 13:45:33.717343 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:45:33 crc kubenswrapper[4833]: E0217 13:45:33.717381 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:45:37.717368473 +0000 UTC m=+27.352467896 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:45:33 crc kubenswrapper[4833]: E0217 13:45:33.717604 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:45:33 crc kubenswrapper[4833]: E0217 13:45:33.717628 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:45:37.717620209 +0000 UTC m=+27.352719642 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:45:33 crc kubenswrapper[4833]: E0217 13:45:33.717676 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:45:33 crc kubenswrapper[4833]: E0217 13:45:33.717686 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:45:33 crc kubenswrapper[4833]: E0217 13:45:33.717696 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:33 crc kubenswrapper[4833]: E0217 13:45:33.717715 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 13:45:37.717708742 +0000 UTC m=+27.352808175 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:33 crc kubenswrapper[4833]: E0217 13:45:33.717750 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:45:33 crc kubenswrapper[4833]: E0217 13:45:33.717757 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:45:33 crc kubenswrapper[4833]: E0217 13:45:33.717765 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:33 crc kubenswrapper[4833]: E0217 13:45:33.717783 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 13:45:37.717777043 +0000 UTC m=+27.352876486 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.745634 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.786337 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.822600 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.867105 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.901945 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.941745 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.982832 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:33Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:33 crc kubenswrapper[4833]: I0217 13:45:33.991951 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 
16:19:37.737551624 +0000 UTC Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.024203 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"na
me\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 
2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.040493 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.040525 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:34 crc kubenswrapper[4833]: E0217 13:45:34.040618 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:45:34 crc kubenswrapper[4833]: E0217 13:45:34.040687 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.069298 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.102077 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating 
authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.151396 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.167211 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerStarted","Data":"42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2"} Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.169112 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" event={"ID":"4eb74ba2-a87a-415f-8978-8ef706346aa3","Type":"ContainerStarted","Data":"ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd"} Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.169948 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5gxjd" event={"ID":"03c9f579-21ad-4977-a5e8-db9272a08557","Type":"ContainerStarted","Data":"2fb78e6870f6a8113b993c701e3b411ebe8001eed0b0814d32356167a775c336"} Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.180155 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.220747 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.262549 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.301510 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.344747 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.382206 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.422395 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.471470 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.513230 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.544632 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.582194 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.631029 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.675179 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.705053 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating 
authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.750870 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.787879 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:34 crc kubenswrapper[4833]: I0217 13:45:34.993172 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 13:14:25.595180263 +0000 UTC Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.041534 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:45:35 crc kubenswrapper[4833]: E0217 13:45:35.041649 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.173656 4833 generic.go:334] "Generic (PLEG): container finished" podID="4eb74ba2-a87a-415f-8978-8ef706346aa3" containerID="ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd" exitCode=0 Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.173731 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" event={"ID":"4eb74ba2-a87a-415f-8978-8ef706346aa3","Type":"ContainerDied","Data":"ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd"} Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.175123 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5gxjd" event={"ID":"03c9f579-21ad-4977-a5e8-db9272a08557","Type":"ContainerStarted","Data":"b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d"} Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.181901 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerStarted","Data":"a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc"} Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.181942 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" 
event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerStarted","Data":"e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec"} Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.181951 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerStarted","Data":"ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7"} Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.188945 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.205774 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.217807 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.231728 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.245442 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.267812 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.281586 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating 
authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.298531 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.308882 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.320425 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.329938 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.341108 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e10810956
64c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.357561 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.369832 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.381930 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.422061 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.473832 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.504473 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.545030 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.592507 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.621974 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.664871 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.703308 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.742774 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.789743 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.824634 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e0
6a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.865105 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.903172 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.947622 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.987108 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:35 crc kubenswrapper[4833]: I0217 13:45:35.994235 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 10:53:56.567985793 +0000 UTC Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.040667 4833 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.040706 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:36 crc kubenswrapper[4833]: E0217 13:45:36.040788 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:45:36 crc kubenswrapper[4833]: E0217 13:45:36.040884 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.162426 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.164940 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.164994 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.165012 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.165194 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.171342 4833 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.171652 4833 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.172892 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.172929 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.172942 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.172966 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.172994 4833 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:36Z","lastTransitionTime":"2026-02-17T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.189468 4833 generic.go:334] "Generic (PLEG): container finished" podID="4eb74ba2-a87a-415f-8978-8ef706346aa3" containerID="2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00" exitCode=0 Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.189628 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" event={"ID":"4eb74ba2-a87a-415f-8978-8ef706346aa3","Type":"ContainerDied","Data":"2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00"} Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.197012 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerStarted","Data":"f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080"} Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.197094 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerStarted","Data":"4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79"} Feb 17 13:45:36 crc kubenswrapper[4833]: E0217 13:45:36.199528 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.205200 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.205241 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.205251 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.205267 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.205279 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:36Z","lastTransitionTime":"2026-02-17T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.208985 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: E0217 13:45:36.219766 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.223793 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.223822 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.223833 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.223849 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.223861 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:36Z","lastTransitionTime":"2026-02-17T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.234506 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: E0217 13:45:36.239316 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.249320 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.249370 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.249383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.249402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.249415 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:36Z","lastTransitionTime":"2026-02-17T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.256960 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: E0217 13:45:36.262148 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.265704 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.265744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.265753 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.265768 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.265779 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:36Z","lastTransitionTime":"2026-02-17T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.269629 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z 
is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: E0217 13:45:36.277056 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: E0217 13:45:36.277237 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.279128 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.279155 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.279167 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.279210 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.279223 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:36Z","lastTransitionTime":"2026-02-17T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.288266 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.297865 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.311591 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.344400 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.382284 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.382342 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.382351 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.382366 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.382395 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:36Z","lastTransitionTime":"2026-02-17T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.382883 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.420955 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.462990 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e0
6a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.484487 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.484540 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.484551 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:36 crc 
kubenswrapper[4833]: I0217 13:45:36.484570 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.484583 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:36Z","lastTransitionTime":"2026-02-17T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.504067 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856
334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.544433 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.582928 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.586631 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.586670 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.586686 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.586702 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.586712 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:36Z","lastTransitionTime":"2026-02-17T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.623195 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.688643 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.688678 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.688689 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.688705 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.688716 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:36Z","lastTransitionTime":"2026-02-17T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.791375 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.791425 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.791435 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.791453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.791465 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:36Z","lastTransitionTime":"2026-02-17T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.894580 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.894656 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.894679 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.894707 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.894729 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:36Z","lastTransitionTime":"2026-02-17T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.994604 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 15:50:41.212373066 +0000 UTC Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.997053 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.997083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.997092 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.997108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:36 crc kubenswrapper[4833]: I0217 13:45:36.997118 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:36Z","lastTransitionTime":"2026-02-17T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.040732 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:45:37 crc kubenswrapper[4833]: E0217 13:45:37.040865 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.100064 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.100140 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.100159 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.100183 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.100200 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:37Z","lastTransitionTime":"2026-02-17T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.202028 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.202086 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.202096 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.202109 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.202118 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:37Z","lastTransitionTime":"2026-02-17T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.203958 4833 generic.go:334] "Generic (PLEG): container finished" podID="4eb74ba2-a87a-415f-8978-8ef706346aa3" containerID="79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9" exitCode=0 Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.204013 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" event={"ID":"4eb74ba2-a87a-415f-8978-8ef706346aa3","Type":"ContainerDied","Data":"79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9"} Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.223969 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.240652 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.253661 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.272103 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.289750 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.302597 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating 
authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.304572 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.304595 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.304603 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.304619 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.304630 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:37Z","lastTransitionTime":"2026-02-17T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.322146 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.333105 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.342383 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.350823 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.360122 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e0
6a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.373584 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.385555 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.396546 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.408051 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.408085 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.408095 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.408110 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.408122 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:37Z","lastTransitionTime":"2026-02-17T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.408548 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.510746 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.511023 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.511122 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.511216 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.511307 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:37Z","lastTransitionTime":"2026-02-17T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.614824 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.615296 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.615374 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.615442 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.615495 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:37Z","lastTransitionTime":"2026-02-17T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.672639 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:45:37 crc kubenswrapper[4833]: E0217 13:45:37.672941 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 13:45:45.672915441 +0000 UTC m=+35.308014884 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.717555 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.717595 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.717607 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.717625 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.717638 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:37Z","lastTransitionTime":"2026-02-17T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.774565 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.774647 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.774689 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.774724 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:37 crc kubenswrapper[4833]: E0217 13:45:37.774832 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:45:37 crc kubenswrapper[4833]: E0217 13:45:37.774903 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:45:45.774882791 +0000 UTC m=+35.409982254 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:45:37 crc kubenswrapper[4833]: E0217 13:45:37.775479 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:45:37 crc kubenswrapper[4833]: E0217 13:45:37.775517 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:45:37 crc kubenswrapper[4833]: E0217 13:45:37.775535 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:37 crc kubenswrapper[4833]: E0217 13:45:37.775581 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 13:45:45.775566457 +0000 UTC m=+35.410665920 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:37 crc kubenswrapper[4833]: E0217 13:45:37.775659 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:45:37 crc kubenswrapper[4833]: E0217 13:45:37.775682 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:45:37 crc kubenswrapper[4833]: E0217 13:45:37.775696 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:37 crc kubenswrapper[4833]: E0217 13:45:37.775736 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 13:45:45.775724321 +0000 UTC m=+35.410823794 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:37 crc kubenswrapper[4833]: E0217 13:45:37.775805 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:45:37 crc kubenswrapper[4833]: E0217 13:45:37.775850 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:45:45.775837634 +0000 UTC m=+35.410937107 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.819344 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.819376 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.819387 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.819402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.819411 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:37Z","lastTransitionTime":"2026-02-17T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.922185 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.922239 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.922248 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.922262 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.922273 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:37Z","lastTransitionTime":"2026-02-17T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:37 crc kubenswrapper[4833]: I0217 13:45:37.995205 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:30:46.931373142 +0000 UTC Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.025179 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.025224 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.025250 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.025270 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.025284 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:38Z","lastTransitionTime":"2026-02-17T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.040598 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.040664 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:38 crc kubenswrapper[4833]: E0217 13:45:38.040751 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:45:38 crc kubenswrapper[4833]: E0217 13:45:38.040852 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.127462 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.127526 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.127550 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.127577 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.127595 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:38Z","lastTransitionTime":"2026-02-17T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.210699 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerStarted","Data":"8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2"} Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.213210 4833 generic.go:334] "Generic (PLEG): container finished" podID="4eb74ba2-a87a-415f-8978-8ef706346aa3" containerID="39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78" exitCode=0 Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.213260 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" event={"ID":"4eb74ba2-a87a-415f-8978-8ef706346aa3","Type":"ContainerDied","Data":"39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78"} Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.224816 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.232772 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.232807 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.232819 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.232835 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.232848 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:38Z","lastTransitionTime":"2026-02-17T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.240571 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.255644 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.267989 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.278901 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.288579 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.300840 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e0
6a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.312706 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.324032 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.334435 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.336559 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.336611 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.336646 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.336669 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.336720 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:38Z","lastTransitionTime":"2026-02-17T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.385968 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.403300 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.414392 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.428499 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.439130 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:38 crc 
kubenswrapper[4833]: I0217 13:45:38.439161 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.439174 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.439190 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.439202 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:38Z","lastTransitionTime":"2026-02-17T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.446796 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.541640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.541685 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.541694 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.541709 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.541718 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:38Z","lastTransitionTime":"2026-02-17T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.644101 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.644143 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.644153 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.644169 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.644181 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:38Z","lastTransitionTime":"2026-02-17T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.746602 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.746646 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.746656 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.746671 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.746682 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:38Z","lastTransitionTime":"2026-02-17T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.848554 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.848627 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.848644 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.848666 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.848678 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:38Z","lastTransitionTime":"2026-02-17T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.951718 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.951785 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.951801 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.951829 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.951846 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:38Z","lastTransitionTime":"2026-02-17T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:38 crc kubenswrapper[4833]: I0217 13:45:38.995616 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 21:34:03.824787284 +0000 UTC Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.041267 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:45:39 crc kubenswrapper[4833]: E0217 13:45:39.041408 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.054246 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.054297 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.054312 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.054331 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.054347 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:39Z","lastTransitionTime":"2026-02-17T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.156865 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.156927 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.156940 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.156957 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.156967 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:39Z","lastTransitionTime":"2026-02-17T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.219629 4833 generic.go:334] "Generic (PLEG): container finished" podID="4eb74ba2-a87a-415f-8978-8ef706346aa3" containerID="bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a" exitCode=0 Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.219680 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" event={"ID":"4eb74ba2-a87a-415f-8978-8ef706346aa3","Type":"ContainerDied","Data":"bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a"} Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.237283 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf
2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.252581 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e0
6a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.262654 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.262719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.262733 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:39 crc 
kubenswrapper[4833]: I0217 13:45:39.262751 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.262764 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:39Z","lastTransitionTime":"2026-02-17T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.268270 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.285329 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.299783 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.312259 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.323381 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.334199 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.345942 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.358953 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.365480 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:39 crc 
kubenswrapper[4833]: I0217 13:45:39.365518 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.365529 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.365545 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.365556 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:39Z","lastTransitionTime":"2026-02-17T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.377182 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.391234 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.409159 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.419413 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.428426 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:39Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.468106 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.468141 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.468152 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.468168 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.468179 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:39Z","lastTransitionTime":"2026-02-17T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.571059 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.571113 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.571123 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.571137 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.571147 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:39Z","lastTransitionTime":"2026-02-17T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.673433 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.673483 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.673495 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.673536 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.673548 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:39Z","lastTransitionTime":"2026-02-17T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.776985 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.777029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.777060 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.777079 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.777091 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:39Z","lastTransitionTime":"2026-02-17T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.879604 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.879667 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.879689 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.879717 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.879738 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:39Z","lastTransitionTime":"2026-02-17T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.982488 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.982874 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.982958 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.983040 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.983131 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:39Z","lastTransitionTime":"2026-02-17T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:39 crc kubenswrapper[4833]: I0217 13:45:39.996596 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 11:33:42.38690956 +0000 UTC Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.040912 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.040923 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:40 crc kubenswrapper[4833]: E0217 13:45:40.041095 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:45:40 crc kubenswrapper[4833]: E0217 13:45:40.041245 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.084963 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.084999 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.085011 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.085028 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.085056 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:40Z","lastTransitionTime":"2026-02-17T13:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.187496 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.187528 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.187536 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.187548 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.187556 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:40Z","lastTransitionTime":"2026-02-17T13:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.227808 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" event={"ID":"4eb74ba2-a87a-415f-8978-8ef706346aa3","Type":"ContainerStarted","Data":"f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b"} Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.232738 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerStarted","Data":"71def7160da0ac0cc97d8dd4f1caa950c70b5a0af4430a79fdea469b28a6b72d"} Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.233185 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.243927 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.270441 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.289474 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.289509 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.289522 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.289537 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.289548 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:40Z","lastTransitionTime":"2026-02-17T13:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.290093 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.306089 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.320526 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.337880 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.348662 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.361125 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.379256 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.388119 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.391279 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.391410 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.391475 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 
13:45:40.391556 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.391616 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:40Z","lastTransitionTime":"2026-02-17T13:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.400759 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.412585 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.424928 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.435255 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.445345 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e0
6a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.456847 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641
e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.469850 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.486491 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.493982 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.494005 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.494013 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.494025 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.494035 4833 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:40Z","lastTransitionTime":"2026-02-17T13:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.501105 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.516065 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.527717 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.541674 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.556017 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.568742 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.580128 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.590430 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.596819 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.596845 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.596854 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.596867 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.596876 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:40Z","lastTransitionTime":"2026-02-17T13:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.601956 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:
45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.621440 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71def7160da0ac0cc97d8dd4f1caa950c70b5a0af4430a79fdea469b28a6b72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.634272 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e
07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.651319 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.660565 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:40Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.699413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.699443 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.699453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.699466 4833 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.699474 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:40Z","lastTransitionTime":"2026-02-17T13:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.801953 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.801995 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.802004 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.802017 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.802026 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:40Z","lastTransitionTime":"2026-02-17T13:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.904391 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.904444 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.904459 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.904490 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.904506 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:40Z","lastTransitionTime":"2026-02-17T13:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:40 crc kubenswrapper[4833]: I0217 13:45:40.997454 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 14:25:51.454079634 +0000 UTC Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.012160 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.012207 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.012231 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.012255 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.012270 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:41Z","lastTransitionTime":"2026-02-17T13:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.040978 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:45:41 crc kubenswrapper[4833]: E0217 13:45:41.041121 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.054120 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.069728 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.091656 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.104606 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.114484 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.114531 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.114542 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.114557 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.114567 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:41Z","lastTransitionTime":"2026-02-17T13:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.120717 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.134283 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.144474 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379
b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.155141 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.167513 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.179374 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.191811 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.203934 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.216330 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:41 crc 
kubenswrapper[4833]: I0217 13:45:41.216368 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.216377 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.216393 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.216404 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:41Z","lastTransitionTime":"2026-02-17T13:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.219688 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71def7160da0ac0cc97d8dd4f1caa950c70b5a0af4430a79fdea469b28a6b72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\
\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.232112 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-c
ert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06b
c35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.235371 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.235762 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.252731 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.257671 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.270362 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.283377 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.304354 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71def7160da0ac0cc97d8dd4f1caa950c70b5a0af4430a79fdea469b28a6b72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.319466 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.320206 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.320225 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.320247 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.320260 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:41Z","lastTransitionTime":"2026-02-17T13:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.321205 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.344104 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.355568 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.370273 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.383096 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.394158 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.406500 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e10810956
64c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.422589 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.422802 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.422872 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.422940 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.422999 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:41Z","lastTransitionTime":"2026-02-17T13:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.424247 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.438888 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.454199 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.470101 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.483545 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.525899 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.525951 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.525963 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.525980 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.525994 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:41Z","lastTransitionTime":"2026-02-17T13:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.628770 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.628801 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.628808 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.628822 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.628831 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:41Z","lastTransitionTime":"2026-02-17T13:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.731859 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.731897 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.731906 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.731920 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.731930 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:41Z","lastTransitionTime":"2026-02-17T13:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.834410 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.834448 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.834458 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.834473 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.834484 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:41Z","lastTransitionTime":"2026-02-17T13:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.936749 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.936794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.936808 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.936828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.936843 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:41Z","lastTransitionTime":"2026-02-17T13:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:41 crc kubenswrapper[4833]: I0217 13:45:41.997952 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 17:20:24.390164693 +0000 UTC Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.039537 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.039630 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.039643 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.039661 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.039672 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:42Z","lastTransitionTime":"2026-02-17T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.040887 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.040932 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:42 crc kubenswrapper[4833]: E0217 13:45:42.040994 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:45:42 crc kubenswrapper[4833]: E0217 13:45:42.041076 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.142143 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.142186 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.142197 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.142214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.142224 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:42Z","lastTransitionTime":"2026-02-17T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.237907 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.244078 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.244223 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.244311 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.244424 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.244545 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:42Z","lastTransitionTime":"2026-02-17T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.346931 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.347177 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.347249 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.347327 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.347431 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:42Z","lastTransitionTime":"2026-02-17T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.450118 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.450426 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.450673 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.451168 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.452123 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:42Z","lastTransitionTime":"2026-02-17T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.564931 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.564964 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.564976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.564991 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.565001 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:42Z","lastTransitionTime":"2026-02-17T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.667333 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.667377 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.667386 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.667400 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.667411 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:42Z","lastTransitionTime":"2026-02-17T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.770148 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.770255 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.770279 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.770304 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.770322 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:42Z","lastTransitionTime":"2026-02-17T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.873792 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.873845 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.873867 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.873894 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.873915 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:42Z","lastTransitionTime":"2026-02-17T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.976662 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.976717 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.976729 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.976746 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.976756 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:42Z","lastTransitionTime":"2026-02-17T13:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:42 crc kubenswrapper[4833]: I0217 13:45:42.998685 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:11:48.670318151 +0000 UTC Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.040936 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:45:43 crc kubenswrapper[4833]: E0217 13:45:43.041200 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.078837 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.078885 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.078894 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.078910 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.078920 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:43Z","lastTransitionTime":"2026-02-17T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.181921 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.181978 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.181995 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.182021 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.182074 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:43Z","lastTransitionTime":"2026-02-17T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.242721 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovnkube-controller/0.log" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.245455 4833 generic.go:334] "Generic (PLEG): container finished" podID="72c5918a-056f-446c-b138-a1be7140a5b0" containerID="71def7160da0ac0cc97d8dd4f1caa950c70b5a0af4430a79fdea469b28a6b72d" exitCode=1 Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.245514 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerDied","Data":"71def7160da0ac0cc97d8dd4f1caa950c70b5a0af4430a79fdea469b28a6b72d"} Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.246695 4833 scope.go:117] "RemoveContainer" containerID="71def7160da0ac0cc97d8dd4f1caa950c70b5a0af4430a79fdea469b28a6b72d" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.264936 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.281865 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.285907 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.285961 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.285975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.285997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.286013 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:43Z","lastTransitionTime":"2026-02-17T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.302282 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.318200 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.331889 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.349528 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71def7160da0ac0cc97d8dd4f1caa950c70b5a0af4430a79fdea469b28a6b72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71def7160da0ac0cc97d8dd4f1caa950c70b5a0af4430a79fdea469b28a6b72d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:42Z\\\",\\\"message\\\":\\\"217 13:45:42.674641 6130 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 13:45:42.674655 6130 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 13:45:42.674685 6130 
factory.go:656] Stopping watch factory\\\\nI0217 13:45:42.674710 6130 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 13:45:42.673711 6130 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 13:45:42.674862 6130 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 13:45:42.674881 6130 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 13:45:42.674884 6130 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 13:45:42.674899 6130 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 13:45:42.674906 6130 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 13:45:42.674913 6130 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 13:45:42.674120 6130 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 13:45:42.674171 6130 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c476
43e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.369940 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.388515 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.388560 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.388576 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.388601 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.388631 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:43Z","lastTransitionTime":"2026-02-17T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.389827 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.400767 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.413158 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.424103 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.448968 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.459019 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e10810956
64c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.473467 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:4
5:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.486361 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.491374 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.491412 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.491424 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.491438 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.491447 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:43Z","lastTransitionTime":"2026-02-17T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.594439 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.594765 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.594965 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.595166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.595350 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:43Z","lastTransitionTime":"2026-02-17T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.663672 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh"] Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.664087 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.666622 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.668669 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.679909 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.691387 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.698748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.698773 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.698781 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.698794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.698802 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:43Z","lastTransitionTime":"2026-02-17T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.703161 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.717495 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.729175 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/962a90ec-217a-4df0-8d83-2e6663953088-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xwfsh\" (UID: \"962a90ec-217a-4df0-8d83-2e6663953088\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.729208 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srrv4\" (UniqueName: \"kubernetes.io/projected/962a90ec-217a-4df0-8d83-2e6663953088-kube-api-access-srrv4\") pod \"ovnkube-control-plane-749d76644c-xwfsh\" (UID: \"962a90ec-217a-4df0-8d83-2e6663953088\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.729230 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/962a90ec-217a-4df0-8d83-2e6663953088-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xwfsh\" (UID: \"962a90ec-217a-4df0-8d83-2e6663953088\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.729247 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/962a90ec-217a-4df0-8d83-2e6663953088-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xwfsh\" (UID: \"962a90ec-217a-4df0-8d83-2e6663953088\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.740005 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
2-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.751730 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.765474 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.788427 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71def7160da0ac0cc97d8dd4f1caa950c70b5a0af4430a79fdea469b28a6b72d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71def7160da0ac0cc97d8dd4f1caa950c70b5a0af4430a79fdea469b28a6b72d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:42Z\\\",\\\"message\\\":\\\"217 13:45:42.674641 6130 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 13:45:42.674655 6130 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 13:45:42.674685 6130 
factory.go:656] Stopping watch factory\\\\nI0217 13:45:42.674710 6130 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 13:45:42.673711 6130 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 13:45:42.674862 6130 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 13:45:42.674881 6130 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 13:45:42.674884 6130 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 13:45:42.674899 6130 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 13:45:42.674906 6130 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 13:45:42.674913 6130 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 13:45:42.674120 6130 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 13:45:42.674171 6130 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c476
43e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.798873 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.801060 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.801088 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.801096 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.801109 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.801120 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:43Z","lastTransitionTime":"2026-02-17T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.809155 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.819560 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.829926 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/962a90ec-217a-4df0-8d83-2e6663953088-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xwfsh\" (UID: \"962a90ec-217a-4df0-8d83-2e6663953088\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.830176 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/962a90ec-217a-4df0-8d83-2e6663953088-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xwfsh\" (UID: \"962a90ec-217a-4df0-8d83-2e6663953088\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.830219 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srrv4\" (UniqueName: \"kubernetes.io/projected/962a90ec-217a-4df0-8d83-2e6663953088-kube-api-access-srrv4\") pod \"ovnkube-control-plane-749d76644c-xwfsh\" (UID: \"962a90ec-217a-4df0-8d83-2e6663953088\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.830266 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/962a90ec-217a-4df0-8d83-2e6663953088-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xwfsh\" (UID: \"962a90ec-217a-4df0-8d83-2e6663953088\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.831111 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.831253 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/962a90ec-217a-4df0-8d83-2e6663953088-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xwfsh\" (UID: \"962a90ec-217a-4df0-8d83-2e6663953088\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.831463 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/962a90ec-217a-4df0-8d83-2e6663953088-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xwfsh\" (UID: \"962a90ec-217a-4df0-8d83-2e6663953088\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.836944 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/962a90ec-217a-4df0-8d83-2e6663953088-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xwfsh\" (UID: \"962a90ec-217a-4df0-8d83-2e6663953088\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.845802 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srrv4\" (UniqueName: \"kubernetes.io/projected/962a90ec-217a-4df0-8d83-2e6663953088-kube-api-access-srrv4\") pod \"ovnkube-control-plane-749d76644c-xwfsh\" (UID: \"962a90ec-217a-4df0-8d83-2e6663953088\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.848804 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.860874 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.872339 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e0
6a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.903027 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.903096 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.903109 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:43 crc 
kubenswrapper[4833]: I0217 13:45:43.903127 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.903139 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:43Z","lastTransitionTime":"2026-02-17T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.907852 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc
d939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.976990 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" Feb 17 13:45:43 crc kubenswrapper[4833]: W0217 13:45:43.995084 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod962a90ec_217a_4df0_8d83_2e6663953088.slice/crio-8acedb8fd5b61ca5b706c86b87e3c9757b241f46909883daa833571772400549 WatchSource:0}: Error finding container 8acedb8fd5b61ca5b706c86b87e3c9757b241f46909883daa833571772400549: Status 404 returned error can't find the container with id 8acedb8fd5b61ca5b706c86b87e3c9757b241f46909883daa833571772400549 Feb 17 13:45:43 crc kubenswrapper[4833]: I0217 13:45:43.999436 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:00:09.917525443 +0000 UTC Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.005723 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.005787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.005795 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.005810 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.005818 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:44Z","lastTransitionTime":"2026-02-17T13:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.040782 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:45:44 crc kubenswrapper[4833]: E0217 13:45:44.040909 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.041303 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:44 crc kubenswrapper[4833]: E0217 13:45:44.041370 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.107714 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.107757 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.107767 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.107784 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.107795 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:44Z","lastTransitionTime":"2026-02-17T13:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.210933 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.211252 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.211265 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.211279 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.211287 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:44Z","lastTransitionTime":"2026-02-17T13:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.250354 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovnkube-controller/0.log" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.252902 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerStarted","Data":"4fd334c37221033a230de003b6ea003903bd63a0618fc37a6383705bd319bd65"} Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.253011 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.254927 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" event={"ID":"962a90ec-217a-4df0-8d83-2e6663953088","Type":"ContainerStarted","Data":"6cf73503bb300f6f71385f8da17c04a4b5d84691d5d7ca3b363b2335e4905c5a"} Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.254950 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" event={"ID":"962a90ec-217a-4df0-8d83-2e6663953088","Type":"ContainerStarted","Data":"8acedb8fd5b61ca5b706c86b87e3c9757b241f46909883daa833571772400549"} Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.266198 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.284299 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.296889 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.309746 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.313182 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.313208 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.313215 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.313228 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.313237 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:44Z","lastTransitionTime":"2026-02-17T13:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.321123 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:
45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.340477 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fd334c37221033a230de003b6ea003903bd63a0618fc37a6383705bd319bd65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71def7160da0ac0cc97d8dd4f1caa950c70b5a0af4430a79fdea469b28a6b72d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:42Z\\\",\\\"message\\\":\\\"217 13:45:42.674641 6130 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 13:45:42.674655 6130 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 13:45:42.674685 6130 factory.go:656] Stopping watch factory\\\\nI0217 13:45:42.674710 6130 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0217 13:45:42.673711 6130 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 13:45:42.674862 6130 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 13:45:42.674881 6130 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 13:45:42.674884 6130 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 13:45:42.674899 6130 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 13:45:42.674906 6130 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 13:45:42.674913 6130 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 13:45:42.674120 6130 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 13:45:42.674171 6130 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\
\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.359506 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.382295 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.392488 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.403027 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.412705 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.415318 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.415352 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.415362 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.415377 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.415386 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:44Z","lastTransitionTime":"2026-02-17T13:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.421927 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.432046 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.446087 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.459797 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.475402 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.518114 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.518150 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.518158 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.518172 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.518184 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:44Z","lastTransitionTime":"2026-02-17T13:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.620881 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.620929 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.620946 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.620969 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.620986 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:44Z","lastTransitionTime":"2026-02-17T13:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.722916 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.722960 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.722968 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.722984 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.722994 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:44Z","lastTransitionTime":"2026-02-17T13:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.767562 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-4b7xf"] Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.768010 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:45:44 crc kubenswrapper[4833]: E0217 13:45:44.768092 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.780358 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\
"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.795388 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.806293 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc 
kubenswrapper[4833]: I0217 13:45:44.819950 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.825020 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.825067 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.825079 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.825091 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.825100 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:44Z","lastTransitionTime":"2026-02-17T13:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.831677 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.838971 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs\") pod \"network-metrics-daemon-4b7xf\" (UID: \"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\") " pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.839030 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft275\" (UniqueName: \"kubernetes.io/projected/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-kube-api-access-ft275\") pod \"network-metrics-daemon-4b7xf\" (UID: \"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\") " pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.842709 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e0
6a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.869156 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641
e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.884076 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.896531 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.911745 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.923895 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.927908 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.927952 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.927962 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.927980 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.927990 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:44Z","lastTransitionTime":"2026-02-17T13:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.937023 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.939381 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft275\" (UniqueName: \"kubernetes.io/projected/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-kube-api-access-ft275\") pod \"network-metrics-daemon-4b7xf\" (UID: \"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\") " pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.939415 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs\") pod \"network-metrics-daemon-4b7xf\" (UID: \"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\") " pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:45:44 crc kubenswrapper[4833]: E0217 13:45:44.939537 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:45:44 crc kubenswrapper[4833]: E0217 
13:45:44.939601 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs podName:892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c nodeName:}" failed. No retries permitted until 2026-02-17 13:45:45.439585276 +0000 UTC m=+35.074684709 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs") pod "network-metrics-daemon-4b7xf" (UID: "892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.949106 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc kubenswrapper[4833]: I0217 13:45:44.955266 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft275\" (UniqueName: \"kubernetes.io/projected/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-kube-api-access-ft275\") pod \"network-metrics-daemon-4b7xf\" (UID: \"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\") " pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:45:44 crc 
kubenswrapper[4833]: I0217 13:45:44.962720 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:44 crc 
kubenswrapper[4833]: I0217 13:45:44.986535 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fd334c37221033a230de003b6ea003903bd63a0618fc37a6383705bd319bd65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71def7160da0ac0cc97d8dd4f1caa950c70b5a0af4430a79fdea469b28a6b72d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:42Z\\\",\\\"message\\\":\\\"217 13:45:42.674641 6130 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 13:45:42.674655 6130 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 13:45:42.674685 6130 factory.go:656] Stopping watch factory\\\\nI0217 13:45:42.674710 6130 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0217 13:45:42.673711 6130 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 13:45:42.674862 6130 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 13:45:42.674881 6130 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 13:45:42.674884 6130 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 13:45:42.674899 6130 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 13:45:42.674906 6130 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 13:45:42.674913 6130 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 13:45:42.674120 6130 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 13:45:42.674171 6130 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\
\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.000334 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 13:35:22.316327593 +0000 UTC Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.008715 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\"
:\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.030285 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.030323 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.030333 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.030347 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.030356 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:45Z","lastTransitionTime":"2026-02-17T13:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.036842 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.040561 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:45:45 crc kubenswrapper[4833]: E0217 13:45:45.040668 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.132885 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.132930 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.132942 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.132963 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.132976 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:45Z","lastTransitionTime":"2026-02-17T13:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.235858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.235907 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.235918 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.235935 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.235947 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:45Z","lastTransitionTime":"2026-02-17T13:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.259363 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovnkube-controller/1.log" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.259906 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovnkube-controller/0.log" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.262531 4833 generic.go:334] "Generic (PLEG): container finished" podID="72c5918a-056f-446c-b138-a1be7140a5b0" containerID="4fd334c37221033a230de003b6ea003903bd63a0618fc37a6383705bd319bd65" exitCode=1 Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.262603 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerDied","Data":"4fd334c37221033a230de003b6ea003903bd63a0618fc37a6383705bd319bd65"} Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.262651 4833 scope.go:117] "RemoveContainer" containerID="71def7160da0ac0cc97d8dd4f1caa950c70b5a0af4430a79fdea469b28a6b72d" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.263656 4833 scope.go:117] "RemoveContainer" containerID="4fd334c37221033a230de003b6ea003903bd63a0618fc37a6383705bd319bd65" Feb 17 13:45:45 crc kubenswrapper[4833]: E0217 13:45:45.263911 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.265154 4833 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" event={"ID":"962a90ec-217a-4df0-8d83-2e6663953088","Type":"ContainerStarted","Data":"4dc181ab232df3bc23da46eabfb43d64f7c4e674152c27338d8ff5faf1c477bf"} Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.280884 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.294888 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.305009 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.316608 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.334272 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.339302 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.339402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.339459 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.339519 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.339610 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:45Z","lastTransitionTime":"2026-02-17T13:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.347871 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f1
82dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.359572 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.373396 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.384649 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.403516 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.424181 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.442523 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:45 crc 
kubenswrapper[4833]: I0217 13:45:45.442578 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.442597 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.442622 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.442640 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:45Z","lastTransitionTime":"2026-02-17T13:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.445233 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs\") pod \"network-metrics-daemon-4b7xf\" (UID: \"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\") " pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:45:45 crc kubenswrapper[4833]: E0217 13:45:45.445417 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:45:45 crc kubenswrapper[4833]: E0217 13:45:45.445481 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs podName:892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c nodeName:}" failed. No retries permitted until 2026-02-17 13:45:46.445465336 +0000 UTC m=+36.080564779 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs") pod "network-metrics-daemon-4b7xf" (UID: "892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.446644 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fd334c37221033a230de003b6ea003903bd63a0618fc37a6383705bd319bd65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71def7160da0ac0cc97d8dd4f1caa950c70b5a0af4430a79fdea469b28a6b72d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:42Z\\\",\\\"message\\\":\\\"217 13:45:42.674641 6130 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 13:45:42.674655 6130 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 13:45:42.674685 6130 factory.go:656] Stopping watch factory\\\\nI0217 13:45:42.674710 6130 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0217 13:45:42.673711 6130 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 13:45:42.674862 6130 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 13:45:42.674881 6130 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 13:45:42.674884 6130 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 13:45:42.674899 6130 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 13:45:42.674906 6130 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 13:45:42.674913 6130 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 13:45:42.674120 6130 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 13:45:42.674171 6130 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fd334c37221033a230de003b6ea003903bd63a0618fc37a6383705bd319bd65\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"enshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI0217 13:45:44.455563 6278 services_controller.go:443] Built service openshift-machine-api/machine-api-operator-machine-webhook LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.250\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 13:45:44.455576 6278 services_controller.go:444] Built service openshift-machine-api/machine-api-operator-machine-webhook LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 13:45:44.455585 6278 services_controller.go:445] Built service openshift-machine-api/machine-api-operator-machine-webhook LB template configs for network=default: []services.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/e
tc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.470568 4833 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a
6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.496584 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.509824 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc 
kubenswrapper[4833]: I0217 13:45:45.524697 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6
s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.535360 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.544612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.544643 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.544653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.544666 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.544676 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:45Z","lastTransitionTime":"2026-02-17T13:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.546916 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.555641 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.568558 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e0
6a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.581033 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641
e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.594300 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.611378 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.624826 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.641447 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.646843 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.646877 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.646888 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.646902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.646912 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:45Z","lastTransitionTime":"2026-02-17T13:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.654280 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.666867 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.679672 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.706418 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fd334c37221033a230de003b6ea003903bd63a0618fc37a6383705bd319bd65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71def7160da0ac0cc97d8dd4f1caa950c70b5a0af4430a79fdea469b28a6b72d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:42Z\\\",\\\"message\\\":\\\"217 13:45:42.674641 6130 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 13:45:42.674655 6130 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 13:45:42.674685 6130 factory.go:656] Stopping watch factory\\\\nI0217 13:45:42.674710 6130 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0217 13:45:42.673711 6130 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 13:45:42.674862 6130 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 13:45:42.674881 6130 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 13:45:42.674884 6130 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 13:45:42.674899 6130 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 13:45:42.674906 6130 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 13:45:42.674913 6130 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 13:45:42.674120 6130 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 13:45:42.674171 6130 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fd334c37221033a230de003b6ea003903bd63a0618fc37a6383705bd319bd65\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"enshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI0217 13:45:44.455563 6278 services_controller.go:443] Built service openshift-machine-api/machine-api-operator-machine-webhook LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.250\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 13:45:44.455576 6278 services_controller.go:444] Built service openshift-machine-api/machine-api-operator-machine-webhook LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 13:45:44.455585 6278 services_controller.go:445] Built service openshift-machine-api/machine-api-operator-machine-webhook LB template configs for network=default: []services.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/e
tc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.720884 4833 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a
6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.740512 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.746838 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:45:45 crc kubenswrapper[4833]: E0217 13:45:45.747116 4833 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:46:01.74707306 +0000 UTC m=+51.382172533 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.748735 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.748791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.748809 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.748832 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.748849 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:45Z","lastTransitionTime":"2026-02-17T13:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.753175 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.765231 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf73503b
b300f6f71385f8da17c04a4b5d84691d5d7ca3b363b2335e4905c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc181ab232df3bc23da46eabfb43d64f7c4e674152c27338d8ff5faf1c477bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.777428 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:45 crc 
kubenswrapper[4833]: I0217 13:45:45.847488 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.847762 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:45:45 crc kubenswrapper[4833]: E0217 13:45:45.847668 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:45:45 crc kubenswrapper[4833]: E0217 13:45:45.847917 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:45:45 crc kubenswrapper[4833]: E0217 13:45:45.847930 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.847875 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:45 crc kubenswrapper[4833]: E0217 13:45:45.847978 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 13:46:01.847963284 +0000 UTC m=+51.483062717 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:45 crc kubenswrapper[4833]: E0217 13:45:45.847866 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.848072 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:45 crc kubenswrapper[4833]: E0217 13:45:45.848101 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:45:45 crc kubenswrapper[4833]: E0217 13:45:45.848124 4833 projected.go:194] Error preparing data for projected volume 
kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:45 crc kubenswrapper[4833]: E0217 13:45:45.848196 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 13:46:01.848177499 +0000 UTC m=+51.483276932 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:45:45 crc kubenswrapper[4833]: E0217 13:45:45.848209 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:45:45 crc kubenswrapper[4833]: E0217 13:45:45.848274 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:46:01.848257221 +0000 UTC m=+51.483356654 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:45:45 crc kubenswrapper[4833]: E0217 13:45:45.848409 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:45:45 crc kubenswrapper[4833]: E0217 13:45:45.848501 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:46:01.848491676 +0000 UTC m=+51.483591109 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.850957 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.851089 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.851196 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.851287 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 
17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.851363 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:45Z","lastTransitionTime":"2026-02-17T13:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.953789 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.953820 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.953830 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.953845 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:45 crc kubenswrapper[4833]: I0217 13:45:45.953855 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:45Z","lastTransitionTime":"2026-02-17T13:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.002085 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 01:06:08.135574254 +0000 UTC Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.041485 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.041556 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:46 crc kubenswrapper[4833]: E0217 13:45:46.041628 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:45:46 crc kubenswrapper[4833]: E0217 13:45:46.041704 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.041482 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:45:46 crc kubenswrapper[4833]: E0217 13:45:46.041805 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.056292 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.056340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.056356 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.056388 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.056404 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:46Z","lastTransitionTime":"2026-02-17T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.158429 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.158467 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.158477 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.158492 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.158502 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:46Z","lastTransitionTime":"2026-02-17T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.261370 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.261443 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.261467 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.261492 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.261510 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:46Z","lastTransitionTime":"2026-02-17T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.273573 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovnkube-controller/1.log" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.278918 4833 scope.go:117] "RemoveContainer" containerID="4fd334c37221033a230de003b6ea003903bd63a0618fc37a6383705bd319bd65" Feb 17 13:45:46 crc kubenswrapper[4833]: E0217 13:45:46.279160 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.304279 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.318155 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.334160 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.357278 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fd334c37221033a230de003b6ea003903bd63a0618fc37a6383705bd319bd65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fd334c37221033a230de003b6ea003903bd63a0618fc37a6383705bd319bd65\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"enshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 13:45:44.455563 6278 services_controller.go:443] Built service openshift-machine-api/machine-api-operator-machine-webhook LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.250\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 13:45:44.455576 6278 services_controller.go:444] Built service openshift-machine-api/machine-api-operator-machine-webhook LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 13:45:44.455585 6278 services_controller.go:445] Built service openshift-machine-api/machine-api-operator-machine-webhook LB template configs for network=default: []services.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24
156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.364001 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.364107 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.364128 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.364152 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.364170 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:46Z","lastTransitionTime":"2026-02-17T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.375724 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4
36df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.388485 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf73503bb300f6f71385f8da17c04a4b5d84691d5d7ca3b363b2335e4905c5a\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc181ab232df3bc23da46eabfb43d64f7c4e674152c27338d8ff5faf1c477bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.399728 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc 
kubenswrapper[4833]: I0217 13:45:46.408129 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6
s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.423224 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da
410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.435156 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.446541 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.455901 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs\") pod \"network-metrics-daemon-4b7xf\" (UID: \"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\") " pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:45:46 crc kubenswrapper[4833]: E0217 13:45:46.456035 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:45:46 crc kubenswrapper[4833]: E0217 13:45:46.456133 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs podName:892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c nodeName:}" failed. No retries permitted until 2026-02-17 13:45:48.456111091 +0000 UTC m=+38.091210544 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs") pod "network-metrics-daemon-4b7xf" (UID: "892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.456434 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.465984 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.466031 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.466072 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.466093 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.466108 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:46Z","lastTransitionTime":"2026-02-17T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.467673 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\"
,\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.483857 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.498168 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-c
onfig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.509267 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.520840 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.540262 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.540326 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.540347 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 
13:45:46.540371 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.540389 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:46Z","lastTransitionTime":"2026-02-17T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:46 crc kubenswrapper[4833]: E0217 13:45:46.554963 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.558829 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.558868 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.558883 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.558904 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.558918 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:46Z","lastTransitionTime":"2026-02-17T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:46 crc kubenswrapper[4833]: E0217 13:45:46.570894 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.574952 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.574997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.575007 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.575024 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.575054 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:46Z","lastTransitionTime":"2026-02-17T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:46 crc kubenswrapper[4833]: E0217 13:45:46.587027 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.590402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.590436 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.590444 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.590458 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.590470 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:46Z","lastTransitionTime":"2026-02-17T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:46 crc kubenswrapper[4833]: E0217 13:45:46.604097 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.607746 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.607774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.607786 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.607804 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.607817 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:46Z","lastTransitionTime":"2026-02-17T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:46 crc kubenswrapper[4833]: E0217 13:45:46.623568 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:46Z is after 2025-08-24T17:21:41Z"
Feb 17 13:45:46 crc kubenswrapper[4833]: E0217 13:45:46.623682 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.624991 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.625019 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.625027 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.625068 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.625082 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:46Z","lastTransitionTime":"2026-02-17T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.727271 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.727313 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.727321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.727338 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.727349 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:46Z","lastTransitionTime":"2026-02-17T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.830400 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.830455 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.830463 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.830476 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.830486 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:46Z","lastTransitionTime":"2026-02-17T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.932991 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.933065 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.933087 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.933110 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:46 crc kubenswrapper[4833]: I0217 13:45:46.933125 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:46Z","lastTransitionTime":"2026-02-17T13:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.002825 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 19:15:31.717612718 +0000 UTC
Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.035718 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.035763 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.035776 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.035791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.035804 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:47Z","lastTransitionTime":"2026-02-17T13:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.041186 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 13:45:47 crc kubenswrapper[4833]: E0217 13:45:47.041407 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.140392 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.140428 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.140436 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.140450 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.140460 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:47Z","lastTransitionTime":"2026-02-17T13:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.243077 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.243114 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.243124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.243140 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.243150 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:47Z","lastTransitionTime":"2026-02-17T13:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.345249 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.345287 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.345296 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.345312 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.345321 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:47Z","lastTransitionTime":"2026-02-17T13:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.447320 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.447357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.447370 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.447387 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.447398 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:47Z","lastTransitionTime":"2026-02-17T13:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.550248 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.550607 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.550801 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.550997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.551237 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:47Z","lastTransitionTime":"2026-02-17T13:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.654083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.654135 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.654145 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.654166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.654178 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:47Z","lastTransitionTime":"2026-02-17T13:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.756227 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.756549 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.756562 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.756578 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.756589 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:47Z","lastTransitionTime":"2026-02-17T13:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.858771 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.858852 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.858871 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.858893 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.858910 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:47Z","lastTransitionTime":"2026-02-17T13:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.961507 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.961541 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.961552 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.961567 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:47 crc kubenswrapper[4833]: I0217 13:45:47.961578 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:47Z","lastTransitionTime":"2026-02-17T13:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.003374 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:38:06.013229825 +0000 UTC Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.040849 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.040849 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:45:48 crc kubenswrapper[4833]: E0217 13:45:48.041139 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:45:48 crc kubenswrapper[4833]: E0217 13:45:48.041255 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.040886 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:48 crc kubenswrapper[4833]: E0217 13:45:48.041425 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.064379 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.064463 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.064482 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.064505 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.064561 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:48Z","lastTransitionTime":"2026-02-17T13:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.167697 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.167768 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.167785 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.167806 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.167823 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:48Z","lastTransitionTime":"2026-02-17T13:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.270917 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.271007 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.271019 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.271036 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.271064 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:48Z","lastTransitionTime":"2026-02-17T13:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.374292 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.374331 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.374346 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.374365 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.374378 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:48Z","lastTransitionTime":"2026-02-17T13:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.475165 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs\") pod \"network-metrics-daemon-4b7xf\" (UID: \"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\") " pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:45:48 crc kubenswrapper[4833]: E0217 13:45:48.475303 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:45:48 crc kubenswrapper[4833]: E0217 13:45:48.475353 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs podName:892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c nodeName:}" failed. No retries permitted until 2026-02-17 13:45:52.47534047 +0000 UTC m=+42.110439903 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs") pod "network-metrics-daemon-4b7xf" (UID: "892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.476391 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.476440 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.476453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.476468 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.476480 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:48Z","lastTransitionTime":"2026-02-17T13:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.578415 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.578478 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.578504 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.578533 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.578555 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:48Z","lastTransitionTime":"2026-02-17T13:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.680918 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.680960 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.680971 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.680988 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.680999 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:48Z","lastTransitionTime":"2026-02-17T13:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.782726 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.782767 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.782775 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.782788 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.782797 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:48Z","lastTransitionTime":"2026-02-17T13:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.885031 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.885123 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.885140 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.885165 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.885183 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:48Z","lastTransitionTime":"2026-02-17T13:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.987327 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.987369 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.987377 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.987390 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:48 crc kubenswrapper[4833]: I0217 13:45:48.987398 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:48Z","lastTransitionTime":"2026-02-17T13:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.004087 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 23:18:16.020168635 +0000 UTC Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.041544 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:45:49 crc kubenswrapper[4833]: E0217 13:45:49.041722 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.089611 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.089677 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.089693 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.089716 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.089733 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:49Z","lastTransitionTime":"2026-02-17T13:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.192289 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.192358 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.192389 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.192416 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.192433 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:49Z","lastTransitionTime":"2026-02-17T13:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.295019 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.295120 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.295137 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.295161 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.295184 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:49Z","lastTransitionTime":"2026-02-17T13:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.397961 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.398024 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.398080 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.398115 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.398137 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:49Z","lastTransitionTime":"2026-02-17T13:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.500584 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.500641 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.500653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.500669 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.500679 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:49Z","lastTransitionTime":"2026-02-17T13:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.602488 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.602559 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.602581 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.602609 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.602631 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:49Z","lastTransitionTime":"2026-02-17T13:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.705571 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.705653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.705688 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.705722 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.705743 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:49Z","lastTransitionTime":"2026-02-17T13:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.808435 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.808463 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.808472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.808484 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.808495 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:49Z","lastTransitionTime":"2026-02-17T13:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.910364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.910438 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.910463 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.910491 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:49 crc kubenswrapper[4833]: I0217 13:45:49.910513 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:49Z","lastTransitionTime":"2026-02-17T13:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.005082 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 20:46:19.082519367 +0000 UTC
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.013135 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.013165 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.013173 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.013186 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.013195 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:50Z","lastTransitionTime":"2026-02-17T13:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.041403 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.041486 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf"
Feb 17 13:45:50 crc kubenswrapper[4833]: E0217 13:45:50.041533 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.041487 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 13:45:50 crc kubenswrapper[4833]: E0217 13:45:50.041660 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c"
Feb 17 13:45:50 crc kubenswrapper[4833]: E0217 13:45:50.041779 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.116905 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.116966 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.116976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.117009 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.117022 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:50Z","lastTransitionTime":"2026-02-17T13:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.219953 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.220005 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.220032 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.220059 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.220071 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:50Z","lastTransitionTime":"2026-02-17T13:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.323417 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.323463 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.323479 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.323494 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.323505 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:50Z","lastTransitionTime":"2026-02-17T13:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.426431 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.426477 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.426489 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.426507 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.426519 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:50Z","lastTransitionTime":"2026-02-17T13:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.529442 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.529485 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.529496 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.529512 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.529523 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:50Z","lastTransitionTime":"2026-02-17T13:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.632065 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.632099 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.632108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.632122 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.632131 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:50Z","lastTransitionTime":"2026-02-17T13:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.735108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.735142 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.735154 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.735168 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.735178 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:50Z","lastTransitionTime":"2026-02-17T13:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.837972 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.838027 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.838058 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.838082 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.838095 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:50Z","lastTransitionTime":"2026-02-17T13:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.941241 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.941275 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.941283 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.941295 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:50 crc kubenswrapper[4833]: I0217 13:45:50.941306 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:50Z","lastTransitionTime":"2026-02-17T13:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.006112 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:30:42.129360201 +0000 UTC
Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.040628 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 13:45:51 crc kubenswrapper[4833]: E0217 13:45:51.040782 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.043540 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.043612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.043624 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.043642 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.043754 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:51Z","lastTransitionTime":"2026-02-17T13:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.053540 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.064871 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.077669 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.088681 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.100358 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.119222 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fd334c37221033a230de003b6ea003903bd63a0618fc37a6383705bd319bd65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fd334c37221033a230de003b6ea003903bd63a0618fc37a6383705bd319bd65\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"enshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 13:45:44.455563 6278 services_controller.go:443] Built service openshift-machine-api/machine-api-operator-machine-webhook LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.250\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 13:45:44.455576 6278 services_controller.go:444] Built service openshift-machine-api/machine-api-operator-machine-webhook LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 13:45:44.455585 6278 services_controller.go:445] Built service openshift-machine-api/machine-api-operator-machine-webhook LB template configs for network=default: []services.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24
156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.132251 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.146500 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.146543 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.146555 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.146573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.146586 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:51Z","lastTransitionTime":"2026-02-17T13:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.153623 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.163886 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.174816 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45
:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.187234 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf73503bb300f6f71385f8da17c04a4b5d84691d5d7ca3b363b2335e4905c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc181ab232df3bc23da46eabfb43d64f7c4e
674152c27338d8ff5faf1c477bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.201615 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.218268 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.232884 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.245736 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e0
6a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.248307 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.248336 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.248346 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:51 crc 
kubenswrapper[4833]: I0217 13:45:51.248361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.248371 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:51Z","lastTransitionTime":"2026-02-17T13:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.259825 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc
d939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.274676 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.350893 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.350943 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.350955 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.350972 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.350987 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:51Z","lastTransitionTime":"2026-02-17T13:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.453070 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.453149 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.453158 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.453170 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.453180 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:51Z","lastTransitionTime":"2026-02-17T13:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.556294 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.556329 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.556361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.556379 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.556390 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:51Z","lastTransitionTime":"2026-02-17T13:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.658998 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.659060 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.659071 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.659086 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.659094 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:51Z","lastTransitionTime":"2026-02-17T13:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.761924 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.761971 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.761983 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.761999 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.762013 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:51Z","lastTransitionTime":"2026-02-17T13:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.865499 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.865568 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.865582 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.865613 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.865628 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:51Z","lastTransitionTime":"2026-02-17T13:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.969360 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.969407 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.969420 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.969436 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:51 crc kubenswrapper[4833]: I0217 13:45:51.969447 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:51Z","lastTransitionTime":"2026-02-17T13:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.007095 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:29:39.580515714 +0000 UTC Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.041109 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.041199 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:45:52 crc kubenswrapper[4833]: E0217 13:45:52.041248 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.041205 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:52 crc kubenswrapper[4833]: E0217 13:45:52.041350 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:45:52 crc kubenswrapper[4833]: E0217 13:45:52.041448 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.072468 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.072516 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.072528 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.072545 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.072557 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:52Z","lastTransitionTime":"2026-02-17T13:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.175277 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.175330 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.175342 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.175361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.175373 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:52Z","lastTransitionTime":"2026-02-17T13:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.278024 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.278079 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.278087 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.278099 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.278108 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:52Z","lastTransitionTime":"2026-02-17T13:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.381312 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.381351 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.381362 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.381379 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.381390 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:52Z","lastTransitionTime":"2026-02-17T13:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.483698 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.483747 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.483758 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.483774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.483783 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:52Z","lastTransitionTime":"2026-02-17T13:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.518877 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs\") pod \"network-metrics-daemon-4b7xf\" (UID: \"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\") " pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:45:52 crc kubenswrapper[4833]: E0217 13:45:52.519061 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:45:52 crc kubenswrapper[4833]: E0217 13:45:52.519121 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs podName:892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c nodeName:}" failed. No retries permitted until 2026-02-17 13:46:00.519105597 +0000 UTC m=+50.154205030 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs") pod "network-metrics-daemon-4b7xf" (UID: "892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.586599 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.586635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.586645 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.586660 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.586673 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:52Z","lastTransitionTime":"2026-02-17T13:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.688536 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.688574 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.688586 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.688600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.688611 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:52Z","lastTransitionTime":"2026-02-17T13:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.791192 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.791241 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.791281 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.791297 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.791307 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:52Z","lastTransitionTime":"2026-02-17T13:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.894016 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.894071 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.894083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.894107 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.894121 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:52Z","lastTransitionTime":"2026-02-17T13:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.996123 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.996167 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.996176 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.996190 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:52 crc kubenswrapper[4833]: I0217 13:45:52.996200 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:52Z","lastTransitionTime":"2026-02-17T13:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.007994 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 06:39:53.161863386 +0000 UTC Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.040682 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:45:53 crc kubenswrapper[4833]: E0217 13:45:53.040848 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.098745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.098807 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.098824 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.098848 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.098864 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:53Z","lastTransitionTime":"2026-02-17T13:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.202121 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.202170 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.202183 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.202200 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.202210 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:53Z","lastTransitionTime":"2026-02-17T13:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.304534 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.304573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.304613 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.304628 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.304638 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:53Z","lastTransitionTime":"2026-02-17T13:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.407168 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.407218 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.407239 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.407269 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.407292 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:53Z","lastTransitionTime":"2026-02-17T13:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.510386 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.510442 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.510459 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.510481 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.510503 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:53Z","lastTransitionTime":"2026-02-17T13:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.613916 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.613988 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.614007 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.614033 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.614110 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:53Z","lastTransitionTime":"2026-02-17T13:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.716959 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.717076 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.717106 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.717132 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.717149 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:53Z","lastTransitionTime":"2026-02-17T13:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.820597 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.820666 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.820706 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.820737 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.820759 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:53Z","lastTransitionTime":"2026-02-17T13:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.923665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.923765 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.923787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.923847 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:53 crc kubenswrapper[4833]: I0217 13:45:53.923866 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:53Z","lastTransitionTime":"2026-02-17T13:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.008599 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 03:20:27.797644004 +0000 UTC Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.027271 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.027334 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.027347 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.027365 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.027381 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:54Z","lastTransitionTime":"2026-02-17T13:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.040598 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.040627 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:54 crc kubenswrapper[4833]: E0217 13:45:54.040682 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.040768 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:45:54 crc kubenswrapper[4833]: E0217 13:45:54.040938 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:45:54 crc kubenswrapper[4833]: E0217 13:45:54.041101 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.130031 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.130114 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.130158 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.130186 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.130207 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:54Z","lastTransitionTime":"2026-02-17T13:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.234980 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.235070 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.235087 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.235109 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.235130 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:54Z","lastTransitionTime":"2026-02-17T13:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.337404 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.337739 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.337833 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.337916 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.337994 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:54Z","lastTransitionTime":"2026-02-17T13:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.440612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.440680 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.440702 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.440731 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.440756 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:54Z","lastTransitionTime":"2026-02-17T13:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.543235 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.543302 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.543315 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.543396 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.543411 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:54Z","lastTransitionTime":"2026-02-17T13:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.645836 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.645926 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.645944 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.645959 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.645969 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:54Z","lastTransitionTime":"2026-02-17T13:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.748908 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.748948 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.748959 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.749001 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.749014 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:54Z","lastTransitionTime":"2026-02-17T13:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.851169 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.851221 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.851233 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.851248 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.851261 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:54Z","lastTransitionTime":"2026-02-17T13:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.954328 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.954385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.954402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.954425 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:54 crc kubenswrapper[4833]: I0217 13:45:54.954442 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:54Z","lastTransitionTime":"2026-02-17T13:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.009431 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 21:42:58.42949147 +0000 UTC Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.041566 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:45:55 crc kubenswrapper[4833]: E0217 13:45:55.041770 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.056991 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.057089 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.057103 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.057130 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.057147 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:55Z","lastTransitionTime":"2026-02-17T13:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.160900 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.160954 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.160966 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.160987 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.161004 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:55Z","lastTransitionTime":"2026-02-17T13:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.263870 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.263932 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.263941 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.263975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.263993 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:55Z","lastTransitionTime":"2026-02-17T13:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.367589 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.367680 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.367705 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.367739 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.367760 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:55Z","lastTransitionTime":"2026-02-17T13:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.470736 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.470787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.470801 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.470821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.470834 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:55Z","lastTransitionTime":"2026-02-17T13:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.573443 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.573504 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.573518 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.573538 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.573842 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:55Z","lastTransitionTime":"2026-02-17T13:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.677180 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.677234 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.677247 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.677263 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.677275 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:55Z","lastTransitionTime":"2026-02-17T13:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.779611 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.779655 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.779666 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.779683 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.779696 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:55Z","lastTransitionTime":"2026-02-17T13:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.882576 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.882643 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.882655 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.882676 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.882692 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:55Z","lastTransitionTime":"2026-02-17T13:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.985988 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.986032 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.986073 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.986091 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:55 crc kubenswrapper[4833]: I0217 13:45:55.986105 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:55Z","lastTransitionTime":"2026-02-17T13:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.010380 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:40:40.35545078 +0000 UTC
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.041237 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.041419 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 13:45:56 crc kubenswrapper[4833]: E0217 13:45:56.041567 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.041659 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf"
Feb 17 13:45:56 crc kubenswrapper[4833]: E0217 13:45:56.041740 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 13:45:56 crc kubenswrapper[4833]: E0217 13:45:56.041844 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c"
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.088526 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.088579 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.088592 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.088611 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.088621 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:56Z","lastTransitionTime":"2026-02-17T13:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.096276 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt"
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.098576 4833 scope.go:117] "RemoveContainer" containerID="4fd334c37221033a230de003b6ea003903bd63a0618fc37a6383705bd319bd65"
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.190918 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.190987 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.191008 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.191140 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.191176 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:56Z","lastTransitionTime":"2026-02-17T13:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.294199 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.294257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.294272 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.294294 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.294306 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:56Z","lastTransitionTime":"2026-02-17T13:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.322244 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovnkube-controller/1.log" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.326349 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerStarted","Data":"79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8"} Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.327705 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.342803 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06b
c35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.362269 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.379191 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.394103 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.397285 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:56 crc 
kubenswrapper[4833]: I0217 13:45:56.397338 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.397349 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.397366 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.397376 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:56Z","lastTransitionTime":"2026-02-17T13:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.415055 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fd334c37221033a230de003b6ea003903bd63a0618fc37a6383705bd319bd65\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"enshift-authentication-operator/metrics]} 
name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 13:45:44.455563 6278 services_controller.go:443] Built service openshift-machine-api/machine-api-operator-machine-webhook LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.250\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 13:45:44.455576 6278 services_controller.go:444] Built service openshift-machine-api/machine-api-operator-machine-webhook LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 13:45:44.455585 6278 services_controller.go:445] Built service openshift-machine-api/machine-api-operator-machine-webhook LB template configs for network=default: 
[]services.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.426594 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.438834 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf73503bb300f6f71385f8da17c04a4b5d84691d5d7ca3b363b2335e4905c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc181ab232df3bc23da46eabfb43d64f7c4e674152c27338d8ff5faf1c477bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.451406 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc 
kubenswrapper[4833]: I0217 13:45:56.464894 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.479704 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.491599 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.500424 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.500473 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.500490 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.500511 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.500527 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:56Z","lastTransitionTime":"2026-02-17T13:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.505284 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.520082 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.538616 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.555434 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.572313 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.588837 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.603941 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.603981 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.603994 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.604012 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.604024 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:56Z","lastTransitionTime":"2026-02-17T13:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.682789 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.682830 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.682839 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.682853 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.682861 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:56Z","lastTransitionTime":"2026-02-17T13:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:56 crc kubenswrapper[4833]: E0217 13:45:56.706301 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.710999 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.711058 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.711072 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.711093 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.711103 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:56Z","lastTransitionTime":"2026-02-17T13:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:56 crc kubenswrapper[4833]: E0217 13:45:56.723260 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.726901 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.726964 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.726985 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.727005 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.727017 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:56Z","lastTransitionTime":"2026-02-17T13:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:56 crc kubenswrapper[4833]: E0217 13:45:56.739755 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.743173 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.743209 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.743218 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.743233 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.743247 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:56Z","lastTransitionTime":"2026-02-17T13:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:56 crc kubenswrapper[4833]: E0217 13:45:56.755155 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.758095 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.758133 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.758148 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.758166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.758179 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:56Z","lastTransitionTime":"2026-02-17T13:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:56 crc kubenswrapper[4833]: E0217 13:45:56.768014 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:56Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:56 crc kubenswrapper[4833]: E0217 13:45:56.768423 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.773396 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.773459 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.773472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.773495 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.773507 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:56Z","lastTransitionTime":"2026-02-17T13:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.875411 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.875457 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.875472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.875489 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.875501 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:56Z","lastTransitionTime":"2026-02-17T13:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.977476 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.977509 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.977517 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.977552 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:56 crc kubenswrapper[4833]: I0217 13:45:56.977562 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:56Z","lastTransitionTime":"2026-02-17T13:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.011276 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:04:33.897723121 +0000 UTC Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.040684 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:45:57 crc kubenswrapper[4833]: E0217 13:45:57.040866 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.081398 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.081450 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.081459 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.081481 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.081497 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:57Z","lastTransitionTime":"2026-02-17T13:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.184192 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.184263 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.184276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.184293 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.184327 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:57Z","lastTransitionTime":"2026-02-17T13:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.287029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.287118 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.287135 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.287159 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.287176 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:57Z","lastTransitionTime":"2026-02-17T13:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.332217 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovnkube-controller/2.log" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.333208 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovnkube-controller/1.log" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.337021 4833 generic.go:334] "Generic (PLEG): container finished" podID="72c5918a-056f-446c-b138-a1be7140a5b0" containerID="79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8" exitCode=1 Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.337093 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerDied","Data":"79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8"} Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.337137 4833 scope.go:117] "RemoveContainer" containerID="4fd334c37221033a230de003b6ea003903bd63a0618fc37a6383705bd319bd65" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.338471 4833 scope.go:117] "RemoveContainer" containerID="79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8" Feb 17 13:45:57 crc kubenswrapper[4833]: E0217 13:45:57.338881 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.360550 4833 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.376896 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.389083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.389308 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.389488 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.389644 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.389786 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:57Z","lastTransitionTime":"2026-02-17T13:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.390841 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.404219 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.443057 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.472141 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fd334c37221033a230de003b6ea003903bd63a0618fc37a6383705bd319bd65\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"enshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 13:45:44.455563 6278 services_controller.go:443] Built service openshift-machine-api/machine-api-operator-machine-webhook LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.250\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 13:45:44.455576 6278 services_controller.go:444] Built service openshift-machine-api/machine-api-operator-machine-webhook LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 13:45:44.455585 6278 services_controller.go:445] Built service openshift-machine-api/machine-api-operator-machine-webhook LB template configs for network=default: []services.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:57Z\\\",\\\"message\\\":\\\"*v1.Pod openshift-etcd/etcd-crc\\\\nI0217 13:45:57.058971 6481 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, 
Template:(*services.Template)(nil\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae
5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.487183 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.492010 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.492088 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.492108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.492129 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.492145 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:57Z","lastTransitionTime":"2026-02-17T13:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.512018 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.524016 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.536113 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf73503bb300f6f71385f8da17c04a4b5d84691d5d7ca3b363b2335e4905c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc181ab232df3bc23da46eabfb43d64f7c4e674152c27338d8ff5faf1c477bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.549388 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:57 crc 
kubenswrapper[4833]: I0217 13:45:57.562420 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.573001 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.588003 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e0
6a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.594309 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.594355 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.594364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:57 crc 
kubenswrapper[4833]: I0217 13:45:57.594379 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.594391 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:57Z","lastTransitionTime":"2026-02-17T13:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.604206 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc
d939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.618203 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.632591 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.696910 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.696959 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.696969 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.696983 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.696993 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:57Z","lastTransitionTime":"2026-02-17T13:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.799845 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.799895 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.799907 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.799924 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.799935 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:57Z","lastTransitionTime":"2026-02-17T13:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.904116 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.904161 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.904182 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.904201 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:57 crc kubenswrapper[4833]: I0217 13:45:57.904214 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:57Z","lastTransitionTime":"2026-02-17T13:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.006952 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.006994 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.007007 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.007023 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.007035 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:58Z","lastTransitionTime":"2026-02-17T13:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.012376 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:49:30.602073406 +0000 UTC Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.040807 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.040894 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.040812 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:45:58 crc kubenswrapper[4833]: E0217 13:45:58.040967 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:45:58 crc kubenswrapper[4833]: E0217 13:45:58.041157 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:45:58 crc kubenswrapper[4833]: E0217 13:45:58.041230 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.109677 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.109739 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.109762 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.109789 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.109809 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:58Z","lastTransitionTime":"2026-02-17T13:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.211938 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.211980 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.211989 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.212005 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.212015 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:58Z","lastTransitionTime":"2026-02-17T13:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.315054 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.315097 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.315107 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.315120 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.315132 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:58Z","lastTransitionTime":"2026-02-17T13:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.341668 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovnkube-controller/2.log" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.345806 4833 scope.go:117] "RemoveContainer" containerID="79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8" Feb 17 13:45:58 crc kubenswrapper[4833]: E0217 13:45:58.346004 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.361682 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.382260 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.397481 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.411056 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.417428 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:58 crc 
kubenswrapper[4833]: I0217 13:45:58.417465 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.417478 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.417496 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.417509 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:58Z","lastTransitionTime":"2026-02-17T13:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.431868 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:57Z\\\",\\\"message\\\":\\\"*v1.Pod openshift-etcd/etcd-crc\\\\nI0217 13:45:57.058971 6481 
services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24
156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.442374 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.458882 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf73503bb300f6f71385f8da17c04a4b5d84691d5d7ca3b363b2335e4905c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc181ab232df3bc23da46eabfb43d64f7c4e674152c27338d8ff5faf1c477bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.473178 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:58 crc 
kubenswrapper[4833]: I0217 13:45:58.487798 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.504256 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.519900 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.519953 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.519965 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.519982 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.519995 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:58Z","lastTransitionTime":"2026-02-17T13:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.522828 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.537902 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.549546 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e0
6a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.572230 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641
e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.585754 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.601240 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.612723 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:45:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.622520 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.622834 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.622937 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.623064 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.623167 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:58Z","lastTransitionTime":"2026-02-17T13:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.725651 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.725929 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.726030 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.726165 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.726256 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:58Z","lastTransitionTime":"2026-02-17T13:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.828968 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.829277 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.829375 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.829505 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.829627 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:58Z","lastTransitionTime":"2026-02-17T13:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.931990 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.932032 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.932056 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.932078 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:58 crc kubenswrapper[4833]: I0217 13:45:58.932087 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:58Z","lastTransitionTime":"2026-02-17T13:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.013106 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 19:16:26.604912215 +0000 UTC Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.034912 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.034959 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.034977 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.035000 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.035017 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:59Z","lastTransitionTime":"2026-02-17T13:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.041309 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:45:59 crc kubenswrapper[4833]: E0217 13:45:59.041471 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.137942 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.137984 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.137994 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.138008 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.138020 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:59Z","lastTransitionTime":"2026-02-17T13:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.239995 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.240029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.240061 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.240078 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.240089 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:59Z","lastTransitionTime":"2026-02-17T13:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.341968 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.342018 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.342031 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.342064 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.342077 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:59Z","lastTransitionTime":"2026-02-17T13:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.444981 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.445026 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.445071 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.445094 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.445111 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:59Z","lastTransitionTime":"2026-02-17T13:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.548294 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.548381 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.548403 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.548432 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.548457 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:59Z","lastTransitionTime":"2026-02-17T13:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.651192 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.651241 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.651251 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.651266 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.651288 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:59Z","lastTransitionTime":"2026-02-17T13:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.754205 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.754266 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.754276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.754298 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.754310 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:59Z","lastTransitionTime":"2026-02-17T13:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.856173 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.856211 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.856222 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.856239 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.856251 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:59Z","lastTransitionTime":"2026-02-17T13:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.959257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.959317 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.959334 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.959362 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:45:59 crc kubenswrapper[4833]: I0217 13:45:59.959381 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:45:59Z","lastTransitionTime":"2026-02-17T13:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.014112 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 23:24:11.614011174 +0000 UTC Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.040908 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.040951 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:00 crc kubenswrapper[4833]: E0217 13:46:00.041029 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.040915 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:00 crc kubenswrapper[4833]: E0217 13:46:00.041219 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:00 crc kubenswrapper[4833]: E0217 13:46:00.041363 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.061975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.062060 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.062075 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.062094 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.062106 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:00Z","lastTransitionTime":"2026-02-17T13:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.165090 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.165124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.165136 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.165151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.165164 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:00Z","lastTransitionTime":"2026-02-17T13:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.266995 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.267277 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.267380 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.267481 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.267567 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:00Z","lastTransitionTime":"2026-02-17T13:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.371008 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.371181 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.371206 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.371238 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.371259 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:00Z","lastTransitionTime":"2026-02-17T13:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.473902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.473957 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.473974 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.473997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.474013 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:00Z","lastTransitionTime":"2026-02-17T13:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.521668 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs\") pod \"network-metrics-daemon-4b7xf\" (UID: \"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\") " pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:00 crc kubenswrapper[4833]: E0217 13:46:00.521868 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:46:00 crc kubenswrapper[4833]: E0217 13:46:00.522212 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs podName:892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c nodeName:}" failed. No retries permitted until 2026-02-17 13:46:16.522191914 +0000 UTC m=+66.157291347 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs") pod "network-metrics-daemon-4b7xf" (UID: "892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.577115 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.577165 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.577181 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.577203 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.577220 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:00Z","lastTransitionTime":"2026-02-17T13:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.680049 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.680093 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.680102 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.680118 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.680138 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:00Z","lastTransitionTime":"2026-02-17T13:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.783188 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.783238 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.783248 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.783261 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.783271 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:00Z","lastTransitionTime":"2026-02-17T13:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.885608 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.885648 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.885658 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.885676 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.885687 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:00Z","lastTransitionTime":"2026-02-17T13:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.989278 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.989329 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.989342 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.989360 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:00 crc kubenswrapper[4833]: I0217 13:46:00.989377 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:00Z","lastTransitionTime":"2026-02-17T13:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.014823 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 17:40:57.13552636 +0000 UTC Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.041645 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:01 crc kubenswrapper[4833]: E0217 13:46:01.041899 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.065772 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771a
ee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e
07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.092546 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.092599 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.092612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.092633 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.092650 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:01Z","lastTransitionTime":"2026-02-17T13:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.100897 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"c
ert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.130555 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.155068 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.184709 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:57Z\\\",\\\"message\\\":\\\"*v1.Pod openshift-etcd/etcd-crc\\\\nI0217 13:45:57.058971 6481 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", 
Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24
156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.196888 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.197272 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.197361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.197400 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.197413 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:01Z","lastTransitionTime":"2026-02-17T13:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.198931 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.210957 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf73503bb300f6f71385f8da17c04a4b5d84691d5d7ca3b363b2335e4905c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc181ab232df3bc23da46eabfb43d64f7c4e
674152c27338d8ff5faf1c477bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.223955 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:01 crc 
kubenswrapper[4833]: I0217 13:46:01.241622 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.256302 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.269785 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.283739 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.294669 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e0
6a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.299374 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.299407 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.299418 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:01 crc 
kubenswrapper[4833]: I0217 13:46:01.299430 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.299438 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:01Z","lastTransitionTime":"2026-02-17T13:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.307419 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc
d939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.318474 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.332371 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.345968 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.402287 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.402333 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.402347 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.402363 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.402374 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:01Z","lastTransitionTime":"2026-02-17T13:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.505265 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.505313 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.505328 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.505349 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.505364 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:01Z","lastTransitionTime":"2026-02-17T13:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.607819 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.607869 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.607883 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.607904 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.607918 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:01Z","lastTransitionTime":"2026-02-17T13:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.711509 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.711648 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.711680 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.711713 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.711736 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:01Z","lastTransitionTime":"2026-02-17T13:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.814657 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.814776 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.814798 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.814858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.814879 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:01Z","lastTransitionTime":"2026-02-17T13:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.835587 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 13:46:01 crc kubenswrapper[4833]: E0217 13:46:01.835775 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:46:33.835744133 +0000 UTC m=+83.470843596 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.916999 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.917100 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.917127 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.917175 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.917190 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:01Z","lastTransitionTime":"2026-02-17T13:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.937019 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.937113 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.937146 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 13:46:01 crc kubenswrapper[4833]: I0217 13:46:01.937223 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 13:46:01 crc kubenswrapper[4833]: E0217 13:46:01.937382 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 17 13:46:01 crc kubenswrapper[4833]: E0217 13:46:01.937471 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:46:33.937450496 +0000 UTC m=+83.572549939 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 17 13:46:01 crc kubenswrapper[4833]: E0217 13:46:01.937481 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 17 13:46:01 crc kubenswrapper[4833]: E0217 13:46:01.937547 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 17 13:46:01 crc kubenswrapper[4833]: E0217 13:46:01.937598 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 17 13:46:01 crc kubenswrapper[4833]: E0217 13:46:01.937612 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 17 13:46:01 crc kubenswrapper[4833]: E0217 13:46:01.937633 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 17 13:46:01 crc kubenswrapper[4833]: E0217 13:46:01.937646 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 13:46:01 crc kubenswrapper[4833]: E0217 13:46:01.937616 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 13:46:01 crc kubenswrapper[4833]: E0217 13:46:01.937644 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:46:33.93761518 +0000 UTC m=+83.572714663 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 17 13:46:01 crc kubenswrapper[4833]: E0217 13:46:01.938613 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 13:46:33.938543342 +0000 UTC m=+83.573642835 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 13:46:01 crc kubenswrapper[4833]: E0217 13:46:01.938686 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 13:46:33.938667595 +0000 UTC m=+83.573767088 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.015800 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 01:50:07.448073104 +0000 UTC
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.019589 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.019627 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.019825 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.019866 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.019881 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:02Z","lastTransitionTime":"2026-02-17T13:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.041242 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.041313 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.041346 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 13:46:02 crc kubenswrapper[4833]: E0217 13:46:02.041493 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 13:46:02 crc kubenswrapper[4833]: E0217 13:46:02.041889 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c"
Feb 17 13:46:02 crc kubenswrapper[4833]: E0217 13:46:02.041959 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.122732 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.122784 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.122796 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.122813 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.122825 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:02Z","lastTransitionTime":"2026-02-17T13:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.226152 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.226220 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.226238 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.226262 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.226281 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:02Z","lastTransitionTime":"2026-02-17T13:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.328815 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.328848 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.328855 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.328869 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.328879 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:02Z","lastTransitionTime":"2026-02-17T13:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.433536 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.433624 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.433650 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.433680 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.433714 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:02Z","lastTransitionTime":"2026-02-17T13:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.536067 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.536131 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.536147 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.536536 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.536615 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:02Z","lastTransitionTime":"2026-02-17T13:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.639989 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.640061 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.640087 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.640109 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.640123 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:02Z","lastTransitionTime":"2026-02-17T13:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.743159 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.743226 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.743239 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.743257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.743269 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:02Z","lastTransitionTime":"2026-02-17T13:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.845671 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.845713 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.845724 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.845741 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.845755 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:02Z","lastTransitionTime":"2026-02-17T13:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.948144 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.948186 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.948198 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.948213 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:02 crc kubenswrapper[4833]: I0217 13:46:02.948225 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:02Z","lastTransitionTime":"2026-02-17T13:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.016637 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 23:38:42.449493966 +0000 UTC
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.041079 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 13:46:03 crc kubenswrapper[4833]: E0217 13:46:03.041257 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.049627 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.049658 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.049666 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.049677 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.049688 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:03Z","lastTransitionTime":"2026-02-17T13:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.152762 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.152828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.152849 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.152876 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.152898 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:03Z","lastTransitionTime":"2026-02-17T13:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.255653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.255702 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.255719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.255742 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.255758 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:03Z","lastTransitionTime":"2026-02-17T13:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.358239 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.358277 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.358288 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.358302 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.358311 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:03Z","lastTransitionTime":"2026-02-17T13:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.461367 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.461414 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.461424 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.461439 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.461448 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:03Z","lastTransitionTime":"2026-02-17T13:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.565701 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.565735 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.565746 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.565762 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.565774 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:03Z","lastTransitionTime":"2026-02-17T13:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.668476 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.668519 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.668530 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.668546 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.668558 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:03Z","lastTransitionTime":"2026-02-17T13:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.771713 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.771823 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.771861 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.771894 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.771922 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:03Z","lastTransitionTime":"2026-02-17T13:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.875032 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.875144 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.875166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.875193 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.875216 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:03Z","lastTransitionTime":"2026-02-17T13:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.977843 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.977889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.977914 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.977936 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:03 crc kubenswrapper[4833]: I0217 13:46:03.977950 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:03Z","lastTransitionTime":"2026-02-17T13:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.017669 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 17:58:21.35279534 +0000 UTC
Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.041434 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 13:46:04 crc kubenswrapper[4833]: E0217 13:46:04.041560 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.041454 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf"
Feb 17 13:46:04 crc kubenswrapper[4833]: E0217 13:46:04.041630 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c"
Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.041430 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 13:46:04 crc kubenswrapper[4833]: E0217 13:46:04.041672 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.080827 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.080855 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.080881 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.080896 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.080906 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:04Z","lastTransitionTime":"2026-02-17T13:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.183120 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.183158 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.183168 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.183183 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.183194 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:04Z","lastTransitionTime":"2026-02-17T13:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.285307 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.285373 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.285390 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.285413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.285429 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:04Z","lastTransitionTime":"2026-02-17T13:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.387282 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.387331 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.387344 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.387363 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.387378 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:04Z","lastTransitionTime":"2026-02-17T13:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.494465 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.494508 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.494523 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.494536 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.494545 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:04Z","lastTransitionTime":"2026-02-17T13:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.597092 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.597130 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.597139 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.597154 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.597163 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:04Z","lastTransitionTime":"2026-02-17T13:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.699733 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.699780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.699789 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.699801 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.699810 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:04Z","lastTransitionTime":"2026-02-17T13:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.802841 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.803088 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.803102 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.803121 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.803134 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:04Z","lastTransitionTime":"2026-02-17T13:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.906519 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.906575 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.906592 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.906625 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:04 crc kubenswrapper[4833]: I0217 13:46:04.906642 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:04Z","lastTransitionTime":"2026-02-17T13:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.009097 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.009161 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.009178 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.009201 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.009219 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:05Z","lastTransitionTime":"2026-02-17T13:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.018286 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 17:12:16.795048663 +0000 UTC Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.040952 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:05 crc kubenswrapper[4833]: E0217 13:46:05.041125 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.074704 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.089363 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.093875 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:05Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.111652 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:05Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.111972 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.111997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.112009 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.112025 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.112087 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:05Z","lastTransitionTime":"2026-02-17T13:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.124282 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:05Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.154209 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:05Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.167165 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:05Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.179861 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:05Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.195182 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:57Z\\\",\\\"message\\\":\\\"*v1.Pod openshift-etcd/etcd-crc\\\\nI0217 13:45:57.058971 6481 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", 
Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24
156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:05Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.207335 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:05Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.214585 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.214631 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.214643 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.214660 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.214673 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:05Z","lastTransitionTime":"2026-02-17T13:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.217532 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf73503bb300f6f71385f8da17c04a4b5d84691d5d7ca3b363b2335e4905c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc181ab232df3bc23da46eabfb43d64f7c4e674152c27338d8ff5faf1c477bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:05Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.225549 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:05Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:05 crc 
kubenswrapper[4833]: I0217 13:46:05.234173 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6
s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:05Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.244809 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da
410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:05Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.256739 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:05Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.267095 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:05Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.274956 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b
7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:05Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.283811 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e0
6a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:05Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.295339 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641
e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:05Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.316730 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.316765 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.316777 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.316791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.316801 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:05Z","lastTransitionTime":"2026-02-17T13:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.418918 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.418966 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.418976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.418993 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.419006 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:05Z","lastTransitionTime":"2026-02-17T13:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.522200 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.522245 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.522254 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.522268 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.522277 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:05Z","lastTransitionTime":"2026-02-17T13:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.625294 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.625356 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.625373 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.625397 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.625415 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:05Z","lastTransitionTime":"2026-02-17T13:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.728716 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.728784 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.728797 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.728821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.728837 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:05Z","lastTransitionTime":"2026-02-17T13:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.831773 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.831828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.831837 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.831858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.831870 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:05Z","lastTransitionTime":"2026-02-17T13:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.934286 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.934327 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.934341 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.934358 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:05 crc kubenswrapper[4833]: I0217 13:46:05.934368 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:05Z","lastTransitionTime":"2026-02-17T13:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.018445 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 07:41:06.496150902 +0000 UTC Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.036233 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.036270 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.036279 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.036294 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.036308 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:06Z","lastTransitionTime":"2026-02-17T13:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.040582 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.040626 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.040601 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:06 crc kubenswrapper[4833]: E0217 13:46:06.040723 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:06 crc kubenswrapper[4833]: E0217 13:46:06.040814 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:06 crc kubenswrapper[4833]: E0217 13:46:06.040895 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.138940 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.138987 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.139003 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.139023 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.139064 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:06Z","lastTransitionTime":"2026-02-17T13:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.241420 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.241444 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.241453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.241465 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.241474 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:06Z","lastTransitionTime":"2026-02-17T13:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.344014 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.344113 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.344135 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.344157 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.344175 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:06Z","lastTransitionTime":"2026-02-17T13:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.447329 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.447650 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.447773 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.447881 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.447993 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:06Z","lastTransitionTime":"2026-02-17T13:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.550843 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.551204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.551313 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.551431 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.551606 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:06Z","lastTransitionTime":"2026-02-17T13:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.654495 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.654916 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.655446 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.655843 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.656413 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:06Z","lastTransitionTime":"2026-02-17T13:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.765307 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.765458 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.765475 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.765519 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.765534 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:06Z","lastTransitionTime":"2026-02-17T13:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.868431 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.868506 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.868530 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.868558 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.868582 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:06Z","lastTransitionTime":"2026-02-17T13:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.971545 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.971592 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.971608 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.971632 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:06 crc kubenswrapper[4833]: I0217 13:46:06.971650 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:06Z","lastTransitionTime":"2026-02-17T13:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.019634 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:06:24.773866538 +0000 UTC Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.041203 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:07 crc kubenswrapper[4833]: E0217 13:46:07.041387 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.070614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.070776 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.070802 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.070836 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.070859 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:07Z","lastTransitionTime":"2026-02-17T13:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:07 crc kubenswrapper[4833]: E0217 13:46:07.089978 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:07Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.095773 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.095812 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.095824 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.095841 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.095852 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:07Z","lastTransitionTime":"2026-02-17T13:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:07 crc kubenswrapper[4833]: E0217 13:46:07.117331 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:07Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.121968 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.122018 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.122032 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.122085 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.122095 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:07Z","lastTransitionTime":"2026-02-17T13:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:07 crc kubenswrapper[4833]: E0217 13:46:07.137524 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:07Z is after 2025-08-24T17:21:41Z"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.142508 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.142557 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.142573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.142594 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.142609 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:07Z","lastTransitionTime":"2026-02-17T13:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:07 crc kubenswrapper[4833]: E0217 13:46:07.160257 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:07Z is after 2025-08-24T17:21:41Z"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.164926 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.164991 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.165009 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.165065 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.165087 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:07Z","lastTransitionTime":"2026-02-17T13:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:07 crc kubenswrapper[4833]: E0217 13:46:07.183148 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:07Z is after 2025-08-24T17:21:41Z"
Feb 17 13:46:07 crc kubenswrapper[4833]: E0217 13:46:07.183370 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.185306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.185349 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.185367 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.185389 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.185406 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:07Z","lastTransitionTime":"2026-02-17T13:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.288416 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.288486 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.288505 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.288532 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.288552 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:07Z","lastTransitionTime":"2026-02-17T13:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.390901 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.390958 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.390976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.391000 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.391017 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:07Z","lastTransitionTime":"2026-02-17T13:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.493930 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.494079 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.494122 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.494156 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.494179 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:07Z","lastTransitionTime":"2026-02-17T13:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.597373 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.597444 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.597478 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.597509 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.597529 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:07Z","lastTransitionTime":"2026-02-17T13:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.700751 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.700846 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.700869 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.700896 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.700919 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:07Z","lastTransitionTime":"2026-02-17T13:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.804177 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.804244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.804271 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.804301 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.804329 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:07Z","lastTransitionTime":"2026-02-17T13:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.906962 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.907024 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.907080 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.907108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:07 crc kubenswrapper[4833]: I0217 13:46:07.907129 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:07Z","lastTransitionTime":"2026-02-17T13:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.009173 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.009224 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.009236 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.009254 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.009268 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:08Z","lastTransitionTime":"2026-02-17T13:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.020724 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:52:39.955885016 +0000 UTC Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.041220 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.041276 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:08 crc kubenswrapper[4833]: E0217 13:46:08.041352 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.041226 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:08 crc kubenswrapper[4833]: E0217 13:46:08.041467 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:08 crc kubenswrapper[4833]: E0217 13:46:08.041532 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.111916 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.111983 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.112003 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.112029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.112082 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:08Z","lastTransitionTime":"2026-02-17T13:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.214462 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.214528 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.214549 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.214577 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.214600 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:08Z","lastTransitionTime":"2026-02-17T13:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.316892 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.316951 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.316968 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.316991 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.317008 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:08Z","lastTransitionTime":"2026-02-17T13:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.419734 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.419801 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.419818 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.419841 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.419858 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:08Z","lastTransitionTime":"2026-02-17T13:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.523000 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.523090 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.523103 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.523120 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.523132 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:08Z","lastTransitionTime":"2026-02-17T13:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.626565 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.626640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.626660 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.626686 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.626702 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:08Z","lastTransitionTime":"2026-02-17T13:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.728899 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.728962 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.728985 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.729012 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.729034 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:08Z","lastTransitionTime":"2026-02-17T13:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.832569 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.832643 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.832681 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.832710 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.832732 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:08Z","lastTransitionTime":"2026-02-17T13:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.938972 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.939080 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.939112 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.939142 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:08 crc kubenswrapper[4833]: I0217 13:46:08.939162 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:08Z","lastTransitionTime":"2026-02-17T13:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.021814 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 15:19:59.966037835 +0000 UTC Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.040975 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:09 crc kubenswrapper[4833]: E0217 13:46:09.041234 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.042791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.042864 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.042889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.042916 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.042938 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:09Z","lastTransitionTime":"2026-02-17T13:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.146004 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.146126 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.146149 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.146178 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.146199 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:09Z","lastTransitionTime":"2026-02-17T13:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.248983 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.249098 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.249123 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.249149 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.249167 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:09Z","lastTransitionTime":"2026-02-17T13:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.351993 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.352087 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.352098 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.352124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.352140 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:09Z","lastTransitionTime":"2026-02-17T13:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.454217 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.454259 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.454270 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.454286 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.454296 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:09Z","lastTransitionTime":"2026-02-17T13:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.557400 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.557453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.557469 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.557490 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.557507 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:09Z","lastTransitionTime":"2026-02-17T13:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.660699 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.660774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.660796 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.660824 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.660847 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:09Z","lastTransitionTime":"2026-02-17T13:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.764379 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.764452 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.764471 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.764493 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.764510 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:09Z","lastTransitionTime":"2026-02-17T13:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.866746 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.866798 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.866815 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.866837 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.866853 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:09Z","lastTransitionTime":"2026-02-17T13:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.969780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.969833 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.969849 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.969872 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:09 crc kubenswrapper[4833]: I0217 13:46:09.969889 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:09Z","lastTransitionTime":"2026-02-17T13:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.022307 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 22:19:08.360396081 +0000 UTC
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.041181 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.041345 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.041385 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 13:46:10 crc kubenswrapper[4833]: E0217 13:46:10.041557 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 13:46:10 crc kubenswrapper[4833]: E0217 13:46:10.041680 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c"
Feb 17 13:46:10 crc kubenswrapper[4833]: E0217 13:46:10.041826 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.073151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.073222 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.073247 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.073276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.073298 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:10Z","lastTransitionTime":"2026-02-17T13:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.177106 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.177192 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.177223 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.177256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.177284 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:10Z","lastTransitionTime":"2026-02-17T13:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.279738 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.279864 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.279898 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.279929 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.280092 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:10Z","lastTransitionTime":"2026-02-17T13:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.381863 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.381923 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.381933 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.381958 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.381975 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:10Z","lastTransitionTime":"2026-02-17T13:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.486006 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.486084 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.486096 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.486117 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.486129 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:10Z","lastTransitionTime":"2026-02-17T13:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.590410 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.591255 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.591302 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.591343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.591364 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:10Z","lastTransitionTime":"2026-02-17T13:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.694415 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.694882 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.694974 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.695166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.695260 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:10Z","lastTransitionTime":"2026-02-17T13:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.798496 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.798544 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.798562 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.798581 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.798595 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:10Z","lastTransitionTime":"2026-02-17T13:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.901355 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.901398 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.901411 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.901428 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:10 crc kubenswrapper[4833]: I0217 13:46:10.901440 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:10Z","lastTransitionTime":"2026-02-17T13:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.003858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.003962 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.003986 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.004033 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.004094 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:11Z","lastTransitionTime":"2026-02-17T13:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.022966 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 16:52:02.211391162 +0000 UTC
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.040558 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 13:46:11 crc kubenswrapper[4833]: E0217 13:46:11.040748 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.042653 4833 scope.go:117] "RemoveContainer" containerID="79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8"
Feb 17 13:46:11 crc kubenswrapper[4833]: E0217 13:46:11.043243 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" podUID="72c5918a-056f-446c-b138-a1be7140a5b0"
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.059925 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:11Z is after 2025-08-24T17:21:41Z"
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.073562 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:11Z is after 2025-08-24T17:21:41Z"
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.087804 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf73503bb300f6f71385f8da17c04a4b5d84691d5d7ca3b363b2335e4905c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc181ab232df3bc23da46eabfb43d64f7c4e674152c27338d8ff5faf1c477bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:11Z is after 2025-08-24T17:21:41Z"
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.101986 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:11Z is after 2025-08-24T17:21:41Z"
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.107197 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.107232 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.107241 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.107256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.107269 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:11Z","lastTransitionTime":"2026-02-17T13:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.121199 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:11Z is after 2025-08-24T17:21:41Z"
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.139351 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:11Z is after 2025-08-24T17:21:41Z"
Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.153898 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e0
6a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.172616 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641
e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.189952 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.208226 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.210022 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.210180 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.210205 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.210243 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.210268 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:11Z","lastTransitionTime":"2026-02-17T13:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.223821 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.240429 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.258174 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5991f03-6fb7-4425-b041-26d760e380ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae2cc390253214c6979cba64584e59e0342c1f750b16c569acfd885cb6b36c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9702bc05160617e0c8ac8fd3d9a81244b5be2bf955ef58f9408fc0b42bea6609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5529e4fe84d20c564ff55b47e32df67ca6aac40d2629a6b5bf96e03a64b79676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.276015 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.293226 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.313313 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:11 crc 
kubenswrapper[4833]: I0217 13:46:11.313393 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.313419 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.313456 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.313481 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:11Z","lastTransitionTime":"2026-02-17T13:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.316209 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:57Z\\\",\\\"message\\\":\\\"*v1.Pod openshift-etcd/etcd-crc\\\\nI0217 13:45:57.058971 6481 
services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24
156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.333770 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.361448 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.416264 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.416363 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.416394 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.416425 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.416449 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:11Z","lastTransitionTime":"2026-02-17T13:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.519574 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.519638 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.519653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.519673 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.519685 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:11Z","lastTransitionTime":"2026-02-17T13:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.622712 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.622776 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.622800 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.622829 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.622851 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:11Z","lastTransitionTime":"2026-02-17T13:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.725806 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.725866 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.725885 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.725907 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.725925 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:11Z","lastTransitionTime":"2026-02-17T13:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.829459 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.829530 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.829554 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.829586 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.829608 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:11Z","lastTransitionTime":"2026-02-17T13:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.932941 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.933094 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.933125 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.933153 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:11 crc kubenswrapper[4833]: I0217 13:46:11.933174 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:11Z","lastTransitionTime":"2026-02-17T13:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.023615 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 01:52:50.217533135 +0000 UTC Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.036158 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.036215 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.036232 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.036256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.036274 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:12Z","lastTransitionTime":"2026-02-17T13:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.040739 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.040772 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:12 crc kubenswrapper[4833]: E0217 13:46:12.040893 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.040935 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:12 crc kubenswrapper[4833]: E0217 13:46:12.041220 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:12 crc kubenswrapper[4833]: E0217 13:46:12.041373 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.140126 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.140206 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.140237 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.140265 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.140284 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:12Z","lastTransitionTime":"2026-02-17T13:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.243159 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.243219 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.243240 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.243265 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.243282 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:12Z","lastTransitionTime":"2026-02-17T13:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.346306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.346372 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.346385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.346402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.346415 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:12Z","lastTransitionTime":"2026-02-17T13:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.495254 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.495330 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.495351 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.495380 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.495404 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:12Z","lastTransitionTime":"2026-02-17T13:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.598271 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.598355 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.598376 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.598403 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.598422 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:12Z","lastTransitionTime":"2026-02-17T13:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.701945 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.702018 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.702069 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.702093 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.702110 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:12Z","lastTransitionTime":"2026-02-17T13:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.805795 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.805876 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.805901 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.805931 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.805955 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:12Z","lastTransitionTime":"2026-02-17T13:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.909856 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.909931 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.909959 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.909990 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:12 crc kubenswrapper[4833]: I0217 13:46:12.910011 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:12Z","lastTransitionTime":"2026-02-17T13:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.013267 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.013314 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.013325 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.013342 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.013354 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:13Z","lastTransitionTime":"2026-02-17T13:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.024581 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 00:26:46.959200891 +0000 UTC Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.041562 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:13 crc kubenswrapper[4833]: E0217 13:46:13.041718 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.116391 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.116440 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.116451 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.116468 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.116482 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:13Z","lastTransitionTime":"2026-02-17T13:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.220019 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.220114 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.220135 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.220179 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.220198 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:13Z","lastTransitionTime":"2026-02-17T13:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.323797 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.323842 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.323852 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.323868 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.323878 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:13Z","lastTransitionTime":"2026-02-17T13:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.426404 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.426457 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.426469 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.426486 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.426500 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:13Z","lastTransitionTime":"2026-02-17T13:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.528918 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.529080 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.529100 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.529124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.529141 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:13Z","lastTransitionTime":"2026-02-17T13:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.632168 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.632232 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.632250 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.632275 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.632294 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:13Z","lastTransitionTime":"2026-02-17T13:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.735897 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.735990 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.736028 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.736114 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.736140 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:13Z","lastTransitionTime":"2026-02-17T13:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.839333 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.839382 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.839391 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.839406 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.839415 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:13Z","lastTransitionTime":"2026-02-17T13:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.941498 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.941574 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.941600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.941631 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:13 crc kubenswrapper[4833]: I0217 13:46:13.941654 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:13Z","lastTransitionTime":"2026-02-17T13:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.024988 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:49:14.222299919 +0000 UTC Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.041350 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.041387 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.041384 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:14 crc kubenswrapper[4833]: E0217 13:46:14.041720 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:14 crc kubenswrapper[4833]: E0217 13:46:14.041521 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:14 crc kubenswrapper[4833]: E0217 13:46:14.041778 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.043561 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.043611 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.044374 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.044407 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.044600 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:14Z","lastTransitionTime":"2026-02-17T13:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.147405 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.147473 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.147497 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.147518 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.147529 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:14Z","lastTransitionTime":"2026-02-17T13:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.250020 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.250203 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.250237 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.250265 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.250290 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:14Z","lastTransitionTime":"2026-02-17T13:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.352909 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.352949 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.352960 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.352977 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.352989 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:14Z","lastTransitionTime":"2026-02-17T13:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.455540 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.455592 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.455633 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.455665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.455688 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:14Z","lastTransitionTime":"2026-02-17T13:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.558552 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.558637 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.558658 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.558740 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.558796 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:14Z","lastTransitionTime":"2026-02-17T13:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.662082 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.662129 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.662140 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.662156 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.662168 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:14Z","lastTransitionTime":"2026-02-17T13:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.765221 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.765276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.765293 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.765400 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.765424 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:14Z","lastTransitionTime":"2026-02-17T13:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.868341 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.868372 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.868381 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.868394 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.868402 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:14Z","lastTransitionTime":"2026-02-17T13:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.971556 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.971598 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.971611 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.971628 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:14 crc kubenswrapper[4833]: I0217 13:46:14.971641 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:14Z","lastTransitionTime":"2026-02-17T13:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.025977 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 11:10:59.727082782 +0000 UTC Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.041497 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:15 crc kubenswrapper[4833]: E0217 13:46:15.041700 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.074712 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.074738 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.074747 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.074761 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.074770 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:15Z","lastTransitionTime":"2026-02-17T13:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.177469 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.177517 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.177529 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.177549 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.177574 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:15Z","lastTransitionTime":"2026-02-17T13:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.282277 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.282335 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.282349 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.282371 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.282393 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:15Z","lastTransitionTime":"2026-02-17T13:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.385756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.385813 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.385832 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.385856 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.385873 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:15Z","lastTransitionTime":"2026-02-17T13:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.488253 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.488291 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.488302 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.488321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.488333 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:15Z","lastTransitionTime":"2026-02-17T13:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.591672 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.591719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.591734 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.591754 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.591768 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:15Z","lastTransitionTime":"2026-02-17T13:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.694894 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.694957 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.694977 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.695002 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.695021 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:15Z","lastTransitionTime":"2026-02-17T13:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.798474 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.798556 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.798579 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.798609 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.798631 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:15Z","lastTransitionTime":"2026-02-17T13:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.901704 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.901763 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.901780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.901804 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:15 crc kubenswrapper[4833]: I0217 13:46:15.901822 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:15Z","lastTransitionTime":"2026-02-17T13:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.004296 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.004336 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.004356 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.004374 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.004388 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:16Z","lastTransitionTime":"2026-02-17T13:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.026949 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:33:43.993220334 +0000 UTC Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.041393 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.041535 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:16 crc kubenswrapper[4833]: E0217 13:46:16.041638 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.041747 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:16 crc kubenswrapper[4833]: E0217 13:46:16.041922 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:16 crc kubenswrapper[4833]: E0217 13:46:16.042092 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.106925 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.106968 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.106979 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.106994 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.107007 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:16Z","lastTransitionTime":"2026-02-17T13:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.209391 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.209471 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.209494 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.209524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.209546 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:16Z","lastTransitionTime":"2026-02-17T13:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.312903 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.312947 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.312965 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.312989 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.313006 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:16Z","lastTransitionTime":"2026-02-17T13:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.414629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.414658 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.414669 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.414683 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.414694 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:16Z","lastTransitionTime":"2026-02-17T13:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.516732 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.516780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.516797 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.516822 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.516838 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:16Z","lastTransitionTime":"2026-02-17T13:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.570437 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs\") pod \"network-metrics-daemon-4b7xf\" (UID: \"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\") " pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:16 crc kubenswrapper[4833]: E0217 13:46:16.570638 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:46:16 crc kubenswrapper[4833]: E0217 13:46:16.570706 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs podName:892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c nodeName:}" failed. No retries permitted until 2026-02-17 13:46:48.570684486 +0000 UTC m=+98.205783949 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs") pod "network-metrics-daemon-4b7xf" (UID: "892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.619337 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.619367 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.619376 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.619387 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.619398 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:16Z","lastTransitionTime":"2026-02-17T13:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.721476 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.721505 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.721515 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.721527 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.721535 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:16Z","lastTransitionTime":"2026-02-17T13:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.823312 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.823344 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.823352 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.823363 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.823372 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:16Z","lastTransitionTime":"2026-02-17T13:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.925188 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.925231 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.925243 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.925257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:16 crc kubenswrapper[4833]: I0217 13:46:16.925267 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:16Z","lastTransitionTime":"2026-02-17T13:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.027467 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:42:28.66824518 +0000 UTC Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.027934 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.027967 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.027979 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.027996 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.028007 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:17Z","lastTransitionTime":"2026-02-17T13:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.041535 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:17 crc kubenswrapper[4833]: E0217 13:46:17.041638 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.130859 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.130907 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.130921 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.130938 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.130953 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:17Z","lastTransitionTime":"2026-02-17T13:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.233749 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.233796 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.233804 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.233821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.233831 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:17Z","lastTransitionTime":"2026-02-17T13:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.266373 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.266419 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.266432 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.266450 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.266463 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:17Z","lastTransitionTime":"2026-02-17T13:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:17 crc kubenswrapper[4833]: E0217 13:46:17.278832 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:17Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.283074 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.283107 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.283116 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.283130 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.283138 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:17Z","lastTransitionTime":"2026-02-17T13:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:17 crc kubenswrapper[4833]: E0217 13:46:17.297070 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:17Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.300949 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.300990 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.301005 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.301023 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.301053 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:17Z","lastTransitionTime":"2026-02-17T13:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:17 crc kubenswrapper[4833]: E0217 13:46:17.313726 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:17Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.318465 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.318523 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.318535 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.318557 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.318569 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:17Z","lastTransitionTime":"2026-02-17T13:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:17 crc kubenswrapper[4833]: E0217 13:46:17.337607 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:17Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.340991 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.341028 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.341052 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.341070 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.341081 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:17Z","lastTransitionTime":"2026-02-17T13:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:17 crc kubenswrapper[4833]: E0217 13:46:17.351810 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:17Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:17 crc kubenswrapper[4833]: E0217 13:46:17.351927 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.353586 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.353617 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.353630 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.353650 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.353663 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:17Z","lastTransitionTime":"2026-02-17T13:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.456652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.456694 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.456706 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.456724 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.456738 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:17Z","lastTransitionTime":"2026-02-17T13:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.559382 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.559422 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.559434 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.559452 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.559467 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:17Z","lastTransitionTime":"2026-02-17T13:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.662876 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.662935 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.662945 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.662962 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.662975 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:17Z","lastTransitionTime":"2026-02-17T13:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.765477 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.765751 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.765817 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.765889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.765965 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:17Z","lastTransitionTime":"2026-02-17T13:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.869017 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.869087 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.869099 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.869114 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.869125 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:17Z","lastTransitionTime":"2026-02-17T13:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.972258 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.972309 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.972320 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.972343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:17 crc kubenswrapper[4833]: I0217 13:46:17.972354 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:17Z","lastTransitionTime":"2026-02-17T13:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.027606 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:09:03.193139937 +0000 UTC Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.040899 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.040951 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.040913 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:18 crc kubenswrapper[4833]: E0217 13:46:18.041019 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:18 crc kubenswrapper[4833]: E0217 13:46:18.041121 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:18 crc kubenswrapper[4833]: E0217 13:46:18.041203 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.074686 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.074720 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.074730 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.074744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.074755 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:18Z","lastTransitionTime":"2026-02-17T13:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.177523 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.177569 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.177581 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.177601 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.177613 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:18Z","lastTransitionTime":"2026-02-17T13:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.281095 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.281151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.281160 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.281174 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.281185 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:18Z","lastTransitionTime":"2026-02-17T13:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.383582 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.383661 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.383685 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.383714 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.383735 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:18Z","lastTransitionTime":"2026-02-17T13:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.487514 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.487558 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.487571 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.487589 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.487605 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:18Z","lastTransitionTime":"2026-02-17T13:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.590308 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.590362 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.590375 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.590394 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.590404 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:18Z","lastTransitionTime":"2026-02-17T13:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.692691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.692728 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.692737 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.692753 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.692763 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:18Z","lastTransitionTime":"2026-02-17T13:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.794464 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.794526 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.794547 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.794573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.794592 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:18Z","lastTransitionTime":"2026-02-17T13:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.897121 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.897210 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.897228 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.897254 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:18 crc kubenswrapper[4833]: I0217 13:46:18.897276 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:18Z","lastTransitionTime":"2026-02-17T13:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.000028 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.000109 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.000121 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.000138 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.000150 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:19Z","lastTransitionTime":"2026-02-17T13:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.028956 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 16:19:42.380124831 +0000 UTC Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.041532 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:19 crc kubenswrapper[4833]: E0217 13:46:19.041783 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.102442 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.102494 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.102514 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.102533 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.102546 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:19Z","lastTransitionTime":"2026-02-17T13:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.204973 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.205014 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.205027 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.205065 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.205081 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:19Z","lastTransitionTime":"2026-02-17T13:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.307296 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.307337 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.307395 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.307410 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.307421 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:19Z","lastTransitionTime":"2026-02-17T13:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.409604 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.409665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.409687 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.409713 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.409732 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:19Z","lastTransitionTime":"2026-02-17T13:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.512971 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.513020 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.513063 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.513088 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.513106 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:19Z","lastTransitionTime":"2026-02-17T13:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.616006 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.616332 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.616361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.616386 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.616404 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:19Z","lastTransitionTime":"2026-02-17T13:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.718779 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.718821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.718832 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.718848 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.718859 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:19Z","lastTransitionTime":"2026-02-17T13:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.826344 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.826418 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.826455 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.826487 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.826511 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:19Z","lastTransitionTime":"2026-02-17T13:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.930210 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.930244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.930254 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.930271 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:19 crc kubenswrapper[4833]: I0217 13:46:19.930281 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:19Z","lastTransitionTime":"2026-02-17T13:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.029625 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 22:57:41.064581952 +0000 UTC Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.033314 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.033372 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.033390 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.033414 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.033428 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:20Z","lastTransitionTime":"2026-02-17T13:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.041514 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.041560 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.041518 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:20 crc kubenswrapper[4833]: E0217 13:46:20.041664 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:20 crc kubenswrapper[4833]: E0217 13:46:20.041767 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:20 crc kubenswrapper[4833]: E0217 13:46:20.041918 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.136743 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.136842 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.136873 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.136906 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.136929 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:20Z","lastTransitionTime":"2026-02-17T13:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.240285 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.240332 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.240342 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.240362 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.240430 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:20Z","lastTransitionTime":"2026-02-17T13:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.343694 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.343743 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.343756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.343774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.343786 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:20Z","lastTransitionTime":"2026-02-17T13:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.445572 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.445628 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.445644 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.445666 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.445682 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:20Z","lastTransitionTime":"2026-02-17T13:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.548143 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.548182 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.548195 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.548211 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.548223 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:20Z","lastTransitionTime":"2026-02-17T13:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.650674 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.650726 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.650739 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.650757 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.650771 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:20Z","lastTransitionTime":"2026-02-17T13:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.754571 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.755084 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.755108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.755133 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.755151 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:20Z","lastTransitionTime":"2026-02-17T13:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.857663 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.857902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.857994 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.858086 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.858144 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:20Z","lastTransitionTime":"2026-02-17T13:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.961467 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.961896 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.962099 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.962250 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:20 crc kubenswrapper[4833]: I0217 13:46:20.962380 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:20Z","lastTransitionTime":"2026-02-17T13:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.030236 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 20:08:28.657721595 +0000 UTC Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.040641 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:21 crc kubenswrapper[4833]: E0217 13:46:21.040780 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.057933 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08
aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.064633 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.064661 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.064670 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.064684 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.064693 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:21Z","lastTransitionTime":"2026-02-17T13:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.075373 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.091253 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.108126 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.124095 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.137675 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.153407 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.167853 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.167901 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:21 crc 
kubenswrapper[4833]: I0217 13:46:21.167912 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.167929 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.167943 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:21Z","lastTransitionTime":"2026-02-17T13:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.168618 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.182103 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.201864 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:57Z\\\",\\\"message\\\":\\\"*v1.Pod openshift-etcd/etcd-crc\\\\nI0217 13:45:57.058971 6481 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", 
Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24
156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.217175 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.238372 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.251830 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5991f03-6fb7-4425-b041-26d760e380ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae2cc390253214c6979cba64584e59e0342c1f750b16c569acfd885cb6b36c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9702bc05160617e0c8ac8fd3d9a81244b5be2bf955ef58f9408fc0b42bea6609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5529e4fe84d20c564ff55b47e32df67ca6aac40d2629a6b5bf96e03a64b79676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.265726 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.270571 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.270607 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.270620 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.270635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.270645 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:21Z","lastTransitionTime":"2026-02-17T13:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.278492 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56
a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.292763 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdc
bfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.308154 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf73503bb300f6f71385f8da17c04a4b5d84691d5d7ca3b363b2335e4905c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc181ab232df3bc23da46eabfb43d64f7c4e
674152c27338d8ff5faf1c477bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.321683 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc 
kubenswrapper[4833]: I0217 13:46:21.373510 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.373563 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.373577 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.373598 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.373611 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:21Z","lastTransitionTime":"2026-02-17T13:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.418402 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wlt4c_a3b8d3ca-f768-4129-9c1a-b4866dd852d4/kube-multus/0.log" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.418693 4833 generic.go:334] "Generic (PLEG): container finished" podID="a3b8d3ca-f768-4129-9c1a-b4866dd852d4" containerID="26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b" exitCode=1 Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.418746 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wlt4c" event={"ID":"a3b8d3ca-f768-4129-9c1a-b4866dd852d4","Type":"ContainerDied","Data":"26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b"} Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.419363 4833 scope.go:117] "RemoveContainer" containerID="26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.437731 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.450129 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.465964 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e10810956
64c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.480091 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.480150 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.480162 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.480179 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.480188 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:21Z","lastTransitionTime":"2026-02-17T13:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.482952 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.496670 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.510498 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.524509 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.538865 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.550002 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.561383 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.574247 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:46:20Z\\\",\\\"message\\\":\\\"2026-02-17T13:45:34+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9e0f4ef6-02a5-40ec-920b-c377ffde18f5\\\\n2026-02-17T13:45:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9e0f4ef6-02a5-40ec-920b-c377ffde18f5 to /host/opt/cni/bin/\\\\n2026-02-17T13:45:35Z [verbose] multus-daemon started\\\\n2026-02-17T13:45:35Z [verbose] Readiness Indicator file check\\\\n2026-02-17T13:46:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.582980 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.583144 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.583232 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.583348 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.583450 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:21Z","lastTransitionTime":"2026-02-17T13:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.593860 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:57Z\\\",\\\"message\\\":\\\"*v1.Pod openshift-etcd/etcd-crc\\\\nI0217 13:45:57.058971 6481 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", 
Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24
156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.609518 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.627836 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.642502 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5991f03-6fb7-4425-b041-26d760e380ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae2cc390253214c6979cba64584e59e0342c1f750b16c569acfd885cb6b36c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9702bc05160617e0c8ac8fd3d9a81244b5be2bf955ef58f9408fc0b42bea6609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5529e4fe84d20c564ff55b47e32df67ca6aac40d2629a6b5bf96e03a64b79676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.656281 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b74
00b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.668594 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf73503bb300f6f71385f8da17c04a4b5d84691d5d7ca3b363b2335e4905c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc181ab232df3bc23da46eabfb43d64f7c4e
674152c27338d8ff5faf1c477bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.679544 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:21 crc 
kubenswrapper[4833]: I0217 13:46:21.686111 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.686261 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.686353 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.686439 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.686507 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:21Z","lastTransitionTime":"2026-02-17T13:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.788767 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.788941 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.789079 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.789178 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.789263 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:21Z","lastTransitionTime":"2026-02-17T13:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.892533 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.892789 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.892859 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.892931 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.892999 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:21Z","lastTransitionTime":"2026-02-17T13:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.996891 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.996975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.996999 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.997029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:21 crc kubenswrapper[4833]: I0217 13:46:21.997097 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:21Z","lastTransitionTime":"2026-02-17T13:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.031555 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 17:08:08.207012172 +0000 UTC Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.040834 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.040869 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.040927 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:22 crc kubenswrapper[4833]: E0217 13:46:22.041126 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:22 crc kubenswrapper[4833]: E0217 13:46:22.041200 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:22 crc kubenswrapper[4833]: E0217 13:46:22.041279 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.100364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.100425 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.100441 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.100464 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.100482 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:22Z","lastTransitionTime":"2026-02-17T13:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.202936 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.202970 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.202979 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.202991 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.203001 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:22Z","lastTransitionTime":"2026-02-17T13:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.305576 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.305618 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.305628 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.305642 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.305652 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:22Z","lastTransitionTime":"2026-02-17T13:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.407406 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.407458 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.407475 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.407503 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.407525 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:22Z","lastTransitionTime":"2026-02-17T13:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.423795 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wlt4c_a3b8d3ca-f768-4129-9c1a-b4866dd852d4/kube-multus/0.log" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.423851 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wlt4c" event={"ID":"a3b8d3ca-f768-4129-9c1a-b4866dd852d4","Type":"ContainerStarted","Data":"efd76798e54bfcbad6d3a5f07396fe8579adcdb3d5bab3c303a9d31ad242e830"} Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.439779 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:22Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.452813 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:22Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.465259 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:22Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.477303 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:22Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.496398 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:22Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.507928 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5991f03-6fb7-4425-b041-26d760e380ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae2cc390253214c6979cba64584e59e0342c1f750b16c569acfd885cb6b36c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9702bc05160617e0c8ac8fd3d9a81244b5be2bf955ef58f9408fc0b42bea6609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5529e4fe84d20c564ff55b47e32df67ca6aac40d2629a6b5bf96e03a64b79676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:22Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.509753 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.509789 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.509800 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.509817 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.509829 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:22Z","lastTransitionTime":"2026-02-17T13:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.523727 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:22Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.536256 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd76798e54bfcbad6d3a5f07396fe8579adcdb3d5bab3c303a9d31ad242e830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:46:20Z\\\",\\\"message\\\":\\\"2026-02-17T13:45:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9e0f4ef6-02a5-40ec-920b-c377ffde18f5\\\\n2026-02-17T13:45:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9e0f4ef6-02a5-40ec-920b-c377ffde18f5 to /host/opt/cni/bin/\\\\n2026-02-17T13:45:35Z [verbose] multus-daemon started\\\\n2026-02-17T13:45:35Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T13:46:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:22Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.554595 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:57Z\\\",\\\"message\\\":\\\"*v1.Pod openshift-etcd/etcd-crc\\\\nI0217 13:45:57.058971 6481 
services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24
156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:22Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.563797 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:22Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.573153 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf73503bb300f6f71385f8da17c04a4b5d84691d5d7ca3b363b2335e4905c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc181ab232df3bc23da46eabfb43d64f7c4e674152c27338d8ff5faf1c477bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:22Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.582266 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:22Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:22 crc 
kubenswrapper[4833]: I0217 13:46:22.592015 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:22Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.602285 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:22Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.611374 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.611418 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.611453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.611473 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.611485 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:22Z","lastTransitionTime":"2026-02-17T13:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.612305 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:22Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.621519 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:22Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.630938 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e0
6a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:22Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.646073 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641
e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:22Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.714163 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.714203 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.714214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.714229 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.714241 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:22Z","lastTransitionTime":"2026-02-17T13:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.816746 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.816788 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.816797 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.816811 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.816822 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:22Z","lastTransitionTime":"2026-02-17T13:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.918889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.918954 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.918967 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.918985 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:22 crc kubenswrapper[4833]: I0217 13:46:22.918998 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:22Z","lastTransitionTime":"2026-02-17T13:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.020859 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.020889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.020898 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.020910 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.020919 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:23Z","lastTransitionTime":"2026-02-17T13:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.032494 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:37:18.658167943 +0000 UTC Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.042325 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:23 crc kubenswrapper[4833]: E0217 13:46:23.042564 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.123298 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.123350 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.123363 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.123387 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.123401 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:23Z","lastTransitionTime":"2026-02-17T13:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.227657 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.227706 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.227724 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.227749 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.227770 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:23Z","lastTransitionTime":"2026-02-17T13:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.330105 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.330157 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.330173 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.330196 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.330214 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:23Z","lastTransitionTime":"2026-02-17T13:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.454267 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.454311 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.454325 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.454344 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.454357 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:23Z","lastTransitionTime":"2026-02-17T13:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.556978 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.557099 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.557119 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.557150 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.557168 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:23Z","lastTransitionTime":"2026-02-17T13:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.660441 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.660871 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.661086 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.661308 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.661439 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:23Z","lastTransitionTime":"2026-02-17T13:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.768704 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.768759 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.768778 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.768801 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.768818 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:23Z","lastTransitionTime":"2026-02-17T13:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.871691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.871742 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.871754 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.871773 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.871784 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:23Z","lastTransitionTime":"2026-02-17T13:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.975785 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.975861 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.975871 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.975892 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:23 crc kubenswrapper[4833]: I0217 13:46:23.975905 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:23Z","lastTransitionTime":"2026-02-17T13:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.033132 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 21:35:42.135042985 +0000 UTC Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.041452 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.041453 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.041585 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:24 crc kubenswrapper[4833]: E0217 13:46:24.041902 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:24 crc kubenswrapper[4833]: E0217 13:46:24.042106 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:24 crc kubenswrapper[4833]: E0217 13:46:24.042161 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.079564 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.079607 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.079620 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.079640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.079656 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:24Z","lastTransitionTime":"2026-02-17T13:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.182962 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.183072 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.183100 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.183132 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.183155 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:24Z","lastTransitionTime":"2026-02-17T13:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.286433 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.286527 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.286547 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.286570 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.286589 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:24Z","lastTransitionTime":"2026-02-17T13:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.388951 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.389011 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.389030 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.389091 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.389110 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:24Z","lastTransitionTime":"2026-02-17T13:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.491435 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.491514 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.491539 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.491570 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.491596 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:24Z","lastTransitionTime":"2026-02-17T13:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.594673 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.594718 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.594728 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.594742 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.594753 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:24Z","lastTransitionTime":"2026-02-17T13:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.698637 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.698696 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.698713 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.698764 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.698781 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:24Z","lastTransitionTime":"2026-02-17T13:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.801547 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.801591 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.801616 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.801630 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.801643 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:24Z","lastTransitionTime":"2026-02-17T13:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.904899 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.904972 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.905009 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.905080 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:24 crc kubenswrapper[4833]: I0217 13:46:24.905102 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:24Z","lastTransitionTime":"2026-02-17T13:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.007957 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.008017 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.008068 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.008102 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.008127 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:25Z","lastTransitionTime":"2026-02-17T13:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.034252 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:58:53.513136138 +0000 UTC
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.040632 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 13:46:25 crc kubenswrapper[4833]: E0217 13:46:25.040817 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.110885 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.110958 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.110971 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.110995 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.111011 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:25Z","lastTransitionTime":"2026-02-17T13:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.214290 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.214601 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.214673 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.214769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.214845 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:25Z","lastTransitionTime":"2026-02-17T13:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.318089 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.318333 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.318433 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.318572 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.318800 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:25Z","lastTransitionTime":"2026-02-17T13:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.421151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.421214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.421235 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.421260 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.421278 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:25Z","lastTransitionTime":"2026-02-17T13:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.522919 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.523026 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.523069 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.523094 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.523110 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:25Z","lastTransitionTime":"2026-02-17T13:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.625927 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.625980 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.625997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.626019 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.626064 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:25Z","lastTransitionTime":"2026-02-17T13:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.728256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.728290 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.728299 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.728312 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.728321 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:25Z","lastTransitionTime":"2026-02-17T13:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.831392 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.831430 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.831439 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.831454 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.831464 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:25Z","lastTransitionTime":"2026-02-17T13:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.933576 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.933649 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.933674 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.933705 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:25 crc kubenswrapper[4833]: I0217 13:46:25.933727 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:25Z","lastTransitionTime":"2026-02-17T13:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.034697 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 07:38:20.825062903 +0000 UTC
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.036517 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.036560 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.036575 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.036596 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.036642 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:26Z","lastTransitionTime":"2026-02-17T13:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.041705 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 13:46:26 crc kubenswrapper[4833]: E0217 13:46:26.041943 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.042033 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf"
Feb 17 13:46:26 crc kubenswrapper[4833]: E0217 13:46:26.042181 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.042465 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 13:46:26 crc kubenswrapper[4833]: E0217 13:46:26.042970 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.043231 4833 scope.go:117] "RemoveContainer" containerID="79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.140349 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.140452 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.140518 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.140553 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.140621 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:26Z","lastTransitionTime":"2026-02-17T13:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.265416 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.265786 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.265799 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.265818 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.265830 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:26Z","lastTransitionTime":"2026-02-17T13:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.368511 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.368558 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.368575 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.368597 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.368614 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:26Z","lastTransitionTime":"2026-02-17T13:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.470707 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovnkube-controller/2.log"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.473328 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.473383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.473400 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.473425 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.473442 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:26Z","lastTransitionTime":"2026-02-17T13:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.476140 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerStarted","Data":"b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b"} Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.477327 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.493206 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd76798e54bfcbad6d3a5f07396fe8579adcdb3d5bab3c303a9d31ad242e830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:46:20Z\\\",\\\"message\\\":\\\"2026-02-17T13:45:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9e0f4ef6-02a5-40ec-920b-c377ffde18f5\\\\n2026-02-17T13:45:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9e0f4ef6-02a5-40ec-920b-c377ffde18f5 to /host/opt/cni/bin/\\\\n2026-02-17T13:45:35Z [verbose] multus-daemon started\\\\n2026-02-17T13:45:35Z [verbose] Readiness Indicator file check\\\\n2026-02-17T13:46:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\
\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.521668 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:57Z\\\",\\\"message\\\":\\\"*v1.Pod openshift-etcd/etcd-crc\\\\nI0217 13:45:57.058971 6481 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", 
Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, 
Template:(*services.Template)(nil\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/
ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.542967 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.565686 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.575213 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.575253 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.575266 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.575284 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.575298 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:26Z","lastTransitionTime":"2026-02-17T13:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.587922 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5991f03-6fb7-4425-b041-26d760e380ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae2cc390253214c6979cba64584e59e0342c1f750b16c569acfd885cb6b36c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89
c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9702bc05160617e0c8ac8fd3d9a81244b5be2bf955ef58f9408fc0b42bea6609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5529e4fe84d20c564ff55b47e32df67ca6aac40d2629a6b5bf96e03a64b79676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T
13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.608170 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.617709 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.627694 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf73503bb300f6f71385f8da17c04a4b5d84691d5d7ca3b363b2335e4905c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc181ab232df3bc23da46eabfb43d64f7c4e674152c27338d8ff5faf1c477bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.637066 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:26 crc 
kubenswrapper[4833]: I0217 13:46:26.646069 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.657114 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b
3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.671400 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8c
cf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host
/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.677366 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.677401 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.677409 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.677424 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.677434 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:26Z","lastTransitionTime":"2026-02-17T13:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.683660 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f1
82dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.694391 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.704648 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.719702 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.733265 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.744091 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.780276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.780308 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.780318 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.780334 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.780347 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:26Z","lastTransitionTime":"2026-02-17T13:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.882531 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.882603 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.882614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.882630 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.882640 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:26Z","lastTransitionTime":"2026-02-17T13:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.985169 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.985202 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.985210 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.985224 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:26 crc kubenswrapper[4833]: I0217 13:46:26.985233 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:26Z","lastTransitionTime":"2026-02-17T13:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.035523 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:22:17.106417245 +0000 UTC Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.040907 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:27 crc kubenswrapper[4833]: E0217 13:46:27.041103 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.087689 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.087751 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.087771 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.087797 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.087816 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:27Z","lastTransitionTime":"2026-02-17T13:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.190913 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.190963 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.190979 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.191003 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.191020 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:27Z","lastTransitionTime":"2026-02-17T13:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.293993 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.294132 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.294196 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.294227 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.294287 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:27Z","lastTransitionTime":"2026-02-17T13:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.397307 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.397350 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.397367 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.397392 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.397408 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:27Z","lastTransitionTime":"2026-02-17T13:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.482533 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovnkube-controller/3.log" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.483960 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovnkube-controller/2.log" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.488432 4833 generic.go:334] "Generic (PLEG): container finished" podID="72c5918a-056f-446c-b138-a1be7140a5b0" containerID="b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b" exitCode=1 Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.488483 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerDied","Data":"b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b"} Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.488531 4833 scope.go:117] "RemoveContainer" containerID="79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.489436 4833 scope.go:117] "RemoveContainer" containerID="b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b" Feb 17 13:46:27 crc kubenswrapper[4833]: E0217 13:46:27.489685 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.501006 4833 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.501080 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.501097 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.501120 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.501137 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:27Z","lastTransitionTime":"2026-02-17T13:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.529662 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.548918 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5991f03-6fb7-4425-b041-26d760e380ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae2cc390253214c6979cba64584e59e0342c1f750b16c569acfd885cb6b36c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9702bc05160617e0c8ac8fd3d9a81244b5be2bf955ef58f9408fc0b42bea6609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5529e4fe84d20c564ff55b47e32df67ca6aac40d2629a6b5bf96e03a64b79676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.565350 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.581528 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd76798e54bfcbad6d3a5f07396fe8579adcdb3d5bab3c303a9d31ad242e830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:46:20Z\\\",\\\"message\\\":\\\"2026-02-17T13:45:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9e0f4ef6-02a5-40ec-920b-c377ffde18f5\\\\n2026-02-17T13:45:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9e0f4ef6-02a5-40ec-920b-c377ffde18f5 to /host/opt/cni/bin/\\\\n2026-02-17T13:45:35Z [verbose] multus-daemon started\\\\n2026-02-17T13:45:35Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T13:46:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.597822 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79b340d04500cf506784f0354e892c3ea66b73dcfd1d22b17e464cc29175b9e8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:45:57Z\\\",\\\"message\\\":\\\"*v1.Pod openshift-etcd/etcd-crc\\\\nI0217 13:45:57.058971 6481 
services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:46:26Z\\\",\\\"message\\\":\\\"e to create admin 
network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:46:26.957178 6879 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vx9xx\\\\nI0217 13:46:26.957185 6879 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-vx9xx\\\\nI0217 13:46:26.957192 6879 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-vx9xx in node crc\\\\nI0217 13:46:26.957197 6879 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-vx9xx after 0 failed attempt(s)\\\\nI0217 13:46:26.957203 6879 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-vx9xx\\\\nI0217 13:46:26.957215 6879 obj_retry.go:303] Retry object setup: 
*\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.603228 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.603248 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.603257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.603269 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.603292 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:27Z","lastTransitionTime":"2026-02-17T13:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.613646 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b
8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.625781 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf73503bb300f6f71385f8da17c04a4b5d84691d5d7ca3b363b2335e4905c5a\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc181ab232df3bc23da46eabfb43d64f7c4e674152c27338d8ff5faf1c477bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.635886 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc 
kubenswrapper[4833]: I0217 13:46:27.648303 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6
s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.663354 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da
410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.684111 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.697845 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.705410 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.705438 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.705445 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.705459 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.705469 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:27Z","lastTransitionTime":"2026-02-17T13:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.708621 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.720076 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.733408 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.744808 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.745763 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.745796 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.745809 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.745824 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.745835 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:27Z","lastTransitionTime":"2026-02-17T13:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:27 crc kubenswrapper[4833]: E0217 13:46:27.758580 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.759831 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.763966 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.764149 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.764243 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:27 crc 
kubenswrapper[4833]: I0217 13:46:27.764340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.764430 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:27Z","lastTransitionTime":"2026-02-17T13:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.777236 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: E0217 13:46:27.781812 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.786094 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.786128 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.786140 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.786155 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.786166 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:27Z","lastTransitionTime":"2026-02-17T13:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:27 crc kubenswrapper[4833]: E0217 13:46:27.807181 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.811311 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.811431 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.811525 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.811620 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.811719 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:27Z","lastTransitionTime":"2026-02-17T13:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:27 crc kubenswrapper[4833]: E0217 13:46:27.829213 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.833944 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.834059 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.834089 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.834177 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.834264 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:27Z","lastTransitionTime":"2026-02-17T13:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:27 crc kubenswrapper[4833]: E0217 13:46:27.855664 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:27Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:27 crc kubenswrapper[4833]: E0217 13:46:27.856013 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.857721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.857770 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.857792 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.857819 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.857839 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:27Z","lastTransitionTime":"2026-02-17T13:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.959901 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.959971 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.959993 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.960019 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:27 crc kubenswrapper[4833]: I0217 13:46:27.960070 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:27Z","lastTransitionTime":"2026-02-17T13:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.036703 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 21:16:36.529637564 +0000 UTC Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.041204 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:28 crc kubenswrapper[4833]: E0217 13:46:28.041430 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.041270 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:28 crc kubenswrapper[4833]: E0217 13:46:28.041871 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.041213 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:28 crc kubenswrapper[4833]: E0217 13:46:28.042148 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.062340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.062463 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.062588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.062700 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.062802 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:28Z","lastTransitionTime":"2026-02-17T13:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.165609 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.165677 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.165699 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.165725 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.165747 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:28Z","lastTransitionTime":"2026-02-17T13:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.268428 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.268691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.268800 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.269005 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.269240 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:28Z","lastTransitionTime":"2026-02-17T13:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.372273 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.372657 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.372790 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.373027 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.373186 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:28Z","lastTransitionTime":"2026-02-17T13:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.476180 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.476219 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.476232 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.476251 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.476266 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:28Z","lastTransitionTime":"2026-02-17T13:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.494780 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovnkube-controller/3.log" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.498183 4833 scope.go:117] "RemoveContainer" containerID="b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b" Feb 17 13:46:28 crc kubenswrapper[4833]: E0217 13:46:28.498380 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.514269 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.529251 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.542122 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.551816 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.566394 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e0
6a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.578221 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.578520 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.578634 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:28 crc 
kubenswrapper[4833]: I0217 13:46:28.578727 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.578816 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:28Z","lastTransitionTime":"2026-02-17T13:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.583173 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc
d939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.595999 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.605767 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.616299 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.632835 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.643395 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5991f03-6fb7-4425-b041-26d760e380ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae2cc390253214c6979cba64584e59e0342c1f750b16c569acfd885cb6b36c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9702bc05160617e0c8ac8fd3d9a81244b5be2bf955ef58f9408fc0b42bea6609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5529e4fe84d20c564ff55b47e32df67ca6aac40d2629a6b5bf96e03a64b79676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.653486 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.664506 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd76798e54bfcbad6d3a5f07396fe8579adcdb3d5bab3c303a9d31ad242e830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:46:20Z\\\",\\\"message\\\":\\\"2026-02-17T13:45:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9e0f4ef6-02a5-40ec-920b-c377ffde18f5\\\\n2026-02-17T13:45:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9e0f4ef6-02a5-40ec-920b-c377ffde18f5 to /host/opt/cni/bin/\\\\n2026-02-17T13:45:35Z [verbose] multus-daemon started\\\\n2026-02-17T13:45:35Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T13:46:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.680836 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.680868 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.680879 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.680896 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.680909 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:28Z","lastTransitionTime":"2026-02-17T13:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.681280 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:46:26Z\\\",\\\"message\\\":\\\"e to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: 
failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:46:26.957178 6879 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vx9xx\\\\nI0217 13:46:26.957185 6879 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-vx9xx\\\\nI0217 13:46:26.957192 6879 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-vx9xx in node crc\\\\nI0217 13:46:26.957197 6879 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-vx9xx after 0 failed attempt(s)\\\\nI0217 13:46:26.957203 6879 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-vx9xx\\\\nI0217 13:46:26.957215 6879 obj_retry.go:303] Retry object setup: *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:46:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24
156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.698550 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.712481 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf73503bb300f6f71385f8da17c04a4b5d84691d5d7ca3b363b2335e4905c5a\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc181ab232df3bc23da46eabfb43d64f7c4e674152c27338d8ff5faf1c477bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.722543 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:28 crc 
kubenswrapper[4833]: I0217 13:46:28.732696 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6
s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.783421 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.783680 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.783778 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.783865 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.783935 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:28Z","lastTransitionTime":"2026-02-17T13:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.886739 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.886795 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.886813 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.886859 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.886875 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:28Z","lastTransitionTime":"2026-02-17T13:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.990285 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.990353 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.990375 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.990402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:28 crc kubenswrapper[4833]: I0217 13:46:28.990424 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:28Z","lastTransitionTime":"2026-02-17T13:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.037753 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 22:35:13.142739047 +0000 UTC Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.041216 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:29 crc kubenswrapper[4833]: E0217 13:46:29.041582 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.093652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.094108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.094307 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.094479 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.094634 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:29Z","lastTransitionTime":"2026-02-17T13:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.197949 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.198019 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.198071 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.198109 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.198131 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:29Z","lastTransitionTime":"2026-02-17T13:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.300470 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.300530 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.300553 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.300583 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.300602 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:29Z","lastTransitionTime":"2026-02-17T13:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.403068 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.403129 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.403147 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.403169 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.403186 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:29Z","lastTransitionTime":"2026-02-17T13:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.505788 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.505858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.505885 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.505915 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.505936 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:29Z","lastTransitionTime":"2026-02-17T13:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.610097 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.610173 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.610191 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.610218 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.610236 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:29Z","lastTransitionTime":"2026-02-17T13:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.713330 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.713858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.714239 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.714420 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.714564 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:29Z","lastTransitionTime":"2026-02-17T13:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.818691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.818744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.818765 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.818789 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.818806 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:29Z","lastTransitionTime":"2026-02-17T13:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.922208 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.922283 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.922308 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.922338 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:29 crc kubenswrapper[4833]: I0217 13:46:29.922360 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:29Z","lastTransitionTime":"2026-02-17T13:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.025725 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.026090 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.026185 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.026286 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.026394 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:30Z","lastTransitionTime":"2026-02-17T13:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.039088 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:25:05.761895368 +0000 UTC Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.041510 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.041562 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.041526 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:30 crc kubenswrapper[4833]: E0217 13:46:30.041701 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:30 crc kubenswrapper[4833]: E0217 13:46:30.041823 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:30 crc kubenswrapper[4833]: E0217 13:46:30.041940 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.129692 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.129765 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.129790 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.129821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.129845 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:30Z","lastTransitionTime":"2026-02-17T13:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.234096 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.234203 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.234271 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.234300 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.234318 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:30Z","lastTransitionTime":"2026-02-17T13:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.337875 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.337918 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.337956 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.337973 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.337983 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:30Z","lastTransitionTime":"2026-02-17T13:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.439642 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.439683 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.439695 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.439719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.439734 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:30Z","lastTransitionTime":"2026-02-17T13:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.543287 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.543361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.543386 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.543420 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.543442 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:30Z","lastTransitionTime":"2026-02-17T13:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.646788 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.646851 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.646867 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.646891 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.646913 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:30Z","lastTransitionTime":"2026-02-17T13:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.750588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.750643 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.750660 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.750682 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.750699 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:30Z","lastTransitionTime":"2026-02-17T13:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.853927 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.854304 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.854532 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.854813 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.854990 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:30Z","lastTransitionTime":"2026-02-17T13:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.958618 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.958697 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.958721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.958746 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:30 crc kubenswrapper[4833]: I0217 13:46:30.958763 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:30Z","lastTransitionTime":"2026-02-17T13:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.040274 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 00:09:49.072152287 +0000 UTC Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.040862 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:31 crc kubenswrapper[4833]: E0217 13:46:31.041417 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.057993 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd76798e54bfcbad6d3a5f07396fe8579adcdb3d5bab3c303a9d31ad242e830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:46:20Z\\\",\\\"message\\\":\\\"2026-02-17T13:45:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9e0f4ef6-02a5-40ec-920b-c377ffde18f5\\\\n2026-02-17T13:45:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9e0f4ef6-02a5-40ec-920b-c377ffde18f5 to /host/opt/cni/bin/\\\\n2026-02-17T13:45:35Z [verbose] multus-daemon started\\\\n2026-02-17T13:45:35Z [verbose] Readiness Indicator file check\\\\n2026-02-17T13:46:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.061778 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.061849 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.061867 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.061892 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.061910 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:31Z","lastTransitionTime":"2026-02-17T13:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.083714 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:46:26Z\\\",\\\"message\\\":\\\"e to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: 
failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:46:26.957178 6879 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vx9xx\\\\nI0217 13:46:26.957185 6879 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-vx9xx\\\\nI0217 13:46:26.957192 6879 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-vx9xx in node crc\\\\nI0217 13:46:26.957197 6879 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-vx9xx after 0 failed attempt(s)\\\\nI0217 13:46:26.957203 6879 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-vx9xx\\\\nI0217 13:46:26.957215 6879 obj_retry.go:303] Retry object setup: *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:46:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24
156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.104032 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\"
,\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.130307 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.146095 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5991f03-6fb7-4425-b041-26d760e380ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae2cc390253214c6979cba64584e59e0342c1f750b16c569acfd885cb6b36c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9702bc05160617e0c8ac8fd3d9a81244b5be2bf955ef58f9408fc0b42bea6609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5529e4fe84d20c564ff55b47e32df67ca6aac40d2629a6b5bf96e03a64b79676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.163528 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.165988 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.166023 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.166054 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.166070 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.166082 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:31Z","lastTransitionTime":"2026-02-17T13:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.176614 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021
e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.188962 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf73503bb300f6f71385f8da17c04a4b5d84691d5d7ca3b363b2335e4905c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc181ab232df3bc23da46eabfb43d64f7c4e
674152c27338d8ff5faf1c477bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.201755 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:31 crc 
kubenswrapper[4833]: I0217 13:46:31.214235 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.229425 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b
3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.245930 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8c
cf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host
/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.262192 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.268627 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.268683 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.268698 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.268719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.268735 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:31Z","lastTransitionTime":"2026-02-17T13:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.279578 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.292515 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"ipta
bles-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.305499 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.318451 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.331755 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.371319 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.371361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.371373 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.371391 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.371404 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:31Z","lastTransitionTime":"2026-02-17T13:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.473836 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.473898 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.473915 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.473939 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.473958 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:31Z","lastTransitionTime":"2026-02-17T13:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.577061 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.577120 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.577133 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.577152 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.577165 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:31Z","lastTransitionTime":"2026-02-17T13:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.680930 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.681195 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.681209 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.681229 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.681242 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:31Z","lastTransitionTime":"2026-02-17T13:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.784297 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.784608 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.784784 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.784951 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.785149 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:31Z","lastTransitionTime":"2026-02-17T13:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.888495 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.888937 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.889195 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.889457 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.889752 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:31Z","lastTransitionTime":"2026-02-17T13:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.992412 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.992480 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.992500 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.992524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:31 crc kubenswrapper[4833]: I0217 13:46:31.992541 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:31Z","lastTransitionTime":"2026-02-17T13:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.041291 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 03:49:28.11109835 +0000 UTC Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.041553 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.041572 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:32 crc kubenswrapper[4833]: E0217 13:46:32.041678 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.041572 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:32 crc kubenswrapper[4833]: E0217 13:46:32.041794 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:32 crc kubenswrapper[4833]: E0217 13:46:32.041958 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.094761 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.094830 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.094852 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.094880 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.094907 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:32Z","lastTransitionTime":"2026-02-17T13:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.197852 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.197902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.197915 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.197934 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.197948 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:32Z","lastTransitionTime":"2026-02-17T13:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.300756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.300811 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.300821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.300836 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.300851 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:32Z","lastTransitionTime":"2026-02-17T13:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.403520 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.403581 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.403600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.403624 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.403644 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:32Z","lastTransitionTime":"2026-02-17T13:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.506433 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.506496 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.506513 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.506537 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.506556 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:32Z","lastTransitionTime":"2026-02-17T13:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.608930 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.608980 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.608995 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.609022 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.609070 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:32Z","lastTransitionTime":"2026-02-17T13:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.711675 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.711744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.711767 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.711798 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.711819 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:32Z","lastTransitionTime":"2026-02-17T13:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.815412 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.815490 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.815531 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.815562 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.815587 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:32Z","lastTransitionTime":"2026-02-17T13:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.918736 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.918821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.918848 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.918878 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:32 crc kubenswrapper[4833]: I0217 13:46:32.918901 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:32Z","lastTransitionTime":"2026-02-17T13:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.022505 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.022577 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.022595 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.022622 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.022639 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:33Z","lastTransitionTime":"2026-02-17T13:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.041355 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:33 crc kubenswrapper[4833]: E0217 13:46:33.041525 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.041620 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 20:48:59.149834723 +0000 UTC Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.125585 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.125642 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.125658 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.125682 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.125702 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:33Z","lastTransitionTime":"2026-02-17T13:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.229573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.229949 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.230156 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.230302 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.230435 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:33Z","lastTransitionTime":"2026-02-17T13:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.334288 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.334363 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.334387 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.334419 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.334441 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:33Z","lastTransitionTime":"2026-02-17T13:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.437374 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.437433 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.437450 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.437471 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.437489 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:33Z","lastTransitionTime":"2026-02-17T13:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.540195 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.540250 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.540336 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.540361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.540377 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:33Z","lastTransitionTime":"2026-02-17T13:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.643283 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.643655 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.643817 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.643933 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.644234 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:33Z","lastTransitionTime":"2026-02-17T13:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.747275 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.747338 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.747362 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.747390 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.747411 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:33Z","lastTransitionTime":"2026-02-17T13:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.846583 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:46:33 crc kubenswrapper[4833]: E0217 13:46:33.846793 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 13:47:37.846760913 +0000 UTC m=+147.481860386 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.850608 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.850675 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.850691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.850714 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.850734 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:33Z","lastTransitionTime":"2026-02-17T13:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.947866 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.947945 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.948003 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.948091 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:33 crc kubenswrapper[4833]: E0217 13:46:33.948242 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:46:33 crc kubenswrapper[4833]: E0217 13:46:33.948319 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:47:37.948293606 +0000 UTC m=+147.583393079 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:46:33 crc kubenswrapper[4833]: E0217 13:46:33.948557 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:46:33 crc kubenswrapper[4833]: E0217 13:46:33.948647 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:46:33 crc kubenswrapper[4833]: E0217 13:46:33.948719 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:46:33 crc kubenswrapper[4833]: E0217 13:46:33.948746 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:46:33 crc kubenswrapper[4833]: E0217 13:46:33.948659 4833 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:46:33 crc kubenswrapper[4833]: E0217 13:46:33.948815 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:46:33 crc kubenswrapper[4833]: E0217 13:46:33.948820 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 13:47:37.94879539 +0000 UTC m=+147.583894863 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:46:33 crc kubenswrapper[4833]: E0217 13:46:33.948857 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 13:47:37.948845771 +0000 UTC m=+147.583945214 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:46:33 crc kubenswrapper[4833]: E0217 13:46:33.948597 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:46:33 crc kubenswrapper[4833]: E0217 13:46:33.948894 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:47:37.948886392 +0000 UTC m=+147.583985835 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.953665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.953844 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.954019 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.954147 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:33 crc kubenswrapper[4833]: I0217 13:46:33.954250 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:33Z","lastTransitionTime":"2026-02-17T13:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.041239 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.041239 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.041277 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.041918 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 02:45:03.505066603 +0000 UTC Feb 17 13:46:34 crc kubenswrapper[4833]: E0217 13:46:34.042688 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:34 crc kubenswrapper[4833]: E0217 13:46:34.042830 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:34 crc kubenswrapper[4833]: E0217 13:46:34.042798 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.057223 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.057286 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.057314 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.057341 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.057363 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:34Z","lastTransitionTime":"2026-02-17T13:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.160930 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.160997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.161015 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.161066 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.161084 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:34Z","lastTransitionTime":"2026-02-17T13:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.264159 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.264235 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.264260 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.264291 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.264315 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:34Z","lastTransitionTime":"2026-02-17T13:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.367775 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.367831 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.367851 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.367875 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.367891 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:34Z","lastTransitionTime":"2026-02-17T13:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.470150 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.470214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.470232 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.470257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.470274 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:34Z","lastTransitionTime":"2026-02-17T13:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.573200 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.573257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.573282 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.573304 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.573317 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:34Z","lastTransitionTime":"2026-02-17T13:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.677448 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.677524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.677544 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.677570 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.677593 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:34Z","lastTransitionTime":"2026-02-17T13:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.780641 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.780701 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.780718 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.780741 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.780759 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:34Z","lastTransitionTime":"2026-02-17T13:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.884101 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.884162 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.884185 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.884211 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.884229 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:34Z","lastTransitionTime":"2026-02-17T13:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.987281 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.987326 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.987339 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.987356 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:34 crc kubenswrapper[4833]: I0217 13:46:34.987368 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:34Z","lastTransitionTime":"2026-02-17T13:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.041547 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:35 crc kubenswrapper[4833]: E0217 13:46:35.041772 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.042191 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 13:22:21.83547081 +0000 UTC Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.089340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.089394 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.089414 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.089436 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.089453 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:35Z","lastTransitionTime":"2026-02-17T13:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.192064 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.192123 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.192144 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.192168 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.192185 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:35Z","lastTransitionTime":"2026-02-17T13:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.295265 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.295317 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.295334 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.295359 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.295377 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:35Z","lastTransitionTime":"2026-02-17T13:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.398701 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.398778 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.398796 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.398819 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.398836 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:35Z","lastTransitionTime":"2026-02-17T13:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.502119 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.502198 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.502216 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.502240 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.502258 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:35Z","lastTransitionTime":"2026-02-17T13:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.605377 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.605496 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.605528 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.605559 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.605579 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:35Z","lastTransitionTime":"2026-02-17T13:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.708650 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.708737 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.708771 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.708803 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.708824 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:35Z","lastTransitionTime":"2026-02-17T13:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.812669 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.812744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.812759 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.812774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.812812 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:35Z","lastTransitionTime":"2026-02-17T13:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.915792 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.916000 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.916167 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.916289 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:35 crc kubenswrapper[4833]: I0217 13:46:35.916390 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:35Z","lastTransitionTime":"2026-02-17T13:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.019399 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.019907 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.020089 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.020254 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.020400 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:36Z","lastTransitionTime":"2026-02-17T13:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.041027 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.041070 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.041067 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:36 crc kubenswrapper[4833]: E0217 13:46:36.041515 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:36 crc kubenswrapper[4833]: E0217 13:46:36.041234 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:36 crc kubenswrapper[4833]: E0217 13:46:36.041719 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.042573 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:41:43.749624112 +0000 UTC Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.123157 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.123206 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.123222 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.123245 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.123263 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:36Z","lastTransitionTime":"2026-02-17T13:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.226908 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.227208 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.227352 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.227532 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.228186 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:36Z","lastTransitionTime":"2026-02-17T13:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.331022 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.331115 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.331136 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.331161 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.331178 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:36Z","lastTransitionTime":"2026-02-17T13:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.433977 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.434034 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.434075 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.434092 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.434104 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:36Z","lastTransitionTime":"2026-02-17T13:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.536603 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.536670 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.536680 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.536702 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.536718 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:36Z","lastTransitionTime":"2026-02-17T13:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.639859 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.639920 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.639942 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.639975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.640013 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:36Z","lastTransitionTime":"2026-02-17T13:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.743380 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.743437 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.743457 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.743479 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.743496 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:36Z","lastTransitionTime":"2026-02-17T13:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.846847 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.846920 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.846942 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.846967 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.846986 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:36Z","lastTransitionTime":"2026-02-17T13:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.950634 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.950696 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.950712 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.950735 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:36 crc kubenswrapper[4833]: I0217 13:46:36.950754 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:36Z","lastTransitionTime":"2026-02-17T13:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.041994 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:37 crc kubenswrapper[4833]: E0217 13:46:37.042230 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.043238 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 23:05:24.752519458 +0000 UTC Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.054245 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.054294 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.054310 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.054340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.054376 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:37Z","lastTransitionTime":"2026-02-17T13:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.059529 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.157397 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.157450 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.157466 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.157490 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.157507 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:37Z","lastTransitionTime":"2026-02-17T13:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.259873 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.259924 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.259940 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.259961 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.259976 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:37Z","lastTransitionTime":"2026-02-17T13:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.362658 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.362717 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.362735 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.362757 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.362775 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:37Z","lastTransitionTime":"2026-02-17T13:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.465342 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.465417 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.465438 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.465462 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.465479 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:37Z","lastTransitionTime":"2026-02-17T13:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.568619 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.568662 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.568672 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.568706 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.568718 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:37Z","lastTransitionTime":"2026-02-17T13:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.671396 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.671474 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.671496 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.671524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.671548 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:37Z","lastTransitionTime":"2026-02-17T13:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.774020 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.774123 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.774142 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.774166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.774184 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:37Z","lastTransitionTime":"2026-02-17T13:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.876395 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.876466 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.876490 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.876521 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.876542 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:37Z","lastTransitionTime":"2026-02-17T13:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.979176 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.979237 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.979255 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.979280 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:37 crc kubenswrapper[4833]: I0217 13:46:37.979298 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:37Z","lastTransitionTime":"2026-02-17T13:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.039874 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.039929 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.039948 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.039970 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.039987 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:38Z","lastTransitionTime":"2026-02-17T13:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.041437 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.041491 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.041502 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:38 crc kubenswrapper[4833]: E0217 13:46:38.041658 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:38 crc kubenswrapper[4833]: E0217 13:46:38.041807 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:38 crc kubenswrapper[4833]: E0217 13:46:38.041878 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.043342 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 07:44:49.823213056 +0000 UTC Feb 17 13:46:38 crc kubenswrapper[4833]: E0217 13:46:38.060173 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redha
t-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc
4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\
"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":4488870
27}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.066541 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.066643 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.066668 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.066703 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.066726 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:38Z","lastTransitionTime":"2026-02-17T13:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:38 crc kubenswrapper[4833]: E0217 13:46:38.090131 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.096197 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.096260 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.096276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.096306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.096331 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:38Z","lastTransitionTime":"2026-02-17T13:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:38 crc kubenswrapper[4833]: E0217 13:46:38.116199 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.122389 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.122452 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.122472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.122497 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.122513 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:38Z","lastTransitionTime":"2026-02-17T13:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:38 crc kubenswrapper[4833]: E0217 13:46:38.142649 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.147816 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.147868 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.147926 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.147950 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.147969 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:38Z","lastTransitionTime":"2026-02-17T13:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:38 crc kubenswrapper[4833]: E0217 13:46:38.171731 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:38Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:38 crc kubenswrapper[4833]: E0217 13:46:38.171974 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.174277 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.174322 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.174339 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.174364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.174405 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:38Z","lastTransitionTime":"2026-02-17T13:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.279108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.279197 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.279215 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.279268 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.279286 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:38Z","lastTransitionTime":"2026-02-17T13:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.383112 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.383161 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.383173 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.383192 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.383210 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:38Z","lastTransitionTime":"2026-02-17T13:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.485977 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.486023 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.486053 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.486070 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.486107 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:38Z","lastTransitionTime":"2026-02-17T13:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.588205 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.588245 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.588253 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.588268 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.588278 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:38Z","lastTransitionTime":"2026-02-17T13:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.691384 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.691452 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.691467 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.691486 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.691503 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:38Z","lastTransitionTime":"2026-02-17T13:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.793703 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.793746 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.793757 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.793773 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.793784 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:38Z","lastTransitionTime":"2026-02-17T13:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.896594 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.896678 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.896707 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.896739 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:38 crc kubenswrapper[4833]: I0217 13:46:38.896761 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:38Z","lastTransitionTime":"2026-02-17T13:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:38.999946 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.000007 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.000025 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.000080 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.000100 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:39Z","lastTransitionTime":"2026-02-17T13:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.040782 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:39 crc kubenswrapper[4833]: E0217 13:46:39.041346 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.043613 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:21:47.741471106 +0000 UTC Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.102728 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.102774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.102787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.102811 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.102823 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:39Z","lastTransitionTime":"2026-02-17T13:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.205639 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.205721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.205741 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.205769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.205795 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:39Z","lastTransitionTime":"2026-02-17T13:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.309276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.309351 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.309409 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.309433 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.309451 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:39Z","lastTransitionTime":"2026-02-17T13:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.413012 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.413116 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.413134 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.413652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.413708 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:39Z","lastTransitionTime":"2026-02-17T13:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.517546 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.517589 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.517609 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.517651 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.517679 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:39Z","lastTransitionTime":"2026-02-17T13:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.620003 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.620072 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.620089 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.620108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.620120 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:39Z","lastTransitionTime":"2026-02-17T13:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.722986 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.723067 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.723080 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.723097 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.723110 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:39Z","lastTransitionTime":"2026-02-17T13:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.825773 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.825833 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.825854 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.825884 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.825907 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:39Z","lastTransitionTime":"2026-02-17T13:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.928984 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.929065 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.929084 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.929107 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:39 crc kubenswrapper[4833]: I0217 13:46:39.929124 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:39Z","lastTransitionTime":"2026-02-17T13:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.032277 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.032340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.032357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.032383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.032405 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:40Z","lastTransitionTime":"2026-02-17T13:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.040858 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.040977 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.040879 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:40 crc kubenswrapper[4833]: E0217 13:46:40.041033 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:40 crc kubenswrapper[4833]: E0217 13:46:40.041884 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:40 crc kubenswrapper[4833]: E0217 13:46:40.041738 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.042407 4833 scope.go:117] "RemoveContainer" containerID="b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b" Feb 17 13:46:40 crc kubenswrapper[4833]: E0217 13:46:40.042681 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.043801 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 14:20:19.265389416 +0000 UTC Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.135103 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.135165 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.135184 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.135207 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.135225 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:40Z","lastTransitionTime":"2026-02-17T13:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.238653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.238837 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.238912 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.238947 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.239097 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:40Z","lastTransitionTime":"2026-02-17T13:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.341786 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.341870 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.341887 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.341910 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.341927 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:40Z","lastTransitionTime":"2026-02-17T13:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.445563 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.445628 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.445651 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.445678 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.445698 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:40Z","lastTransitionTime":"2026-02-17T13:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.548879 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.548947 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.548965 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.548994 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.549014 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:40Z","lastTransitionTime":"2026-02-17T13:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.652225 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.652260 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.652268 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.652282 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.652291 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:40Z","lastTransitionTime":"2026-02-17T13:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.755496 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.755623 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.755645 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.755668 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.755685 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:40Z","lastTransitionTime":"2026-02-17T13:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.859066 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.859109 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.859123 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.859143 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.859158 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:40Z","lastTransitionTime":"2026-02-17T13:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.961690 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.961864 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.961895 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.961922 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:40 crc kubenswrapper[4833]: I0217 13:46:40.961939 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:40Z","lastTransitionTime":"2026-02-17T13:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.041766 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:41 crc kubenswrapper[4833]: E0217 13:46:41.042193 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.044183 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:58:45.620312793 +0000 UTC Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.059166 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\
\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.064412 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.064447 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.064463 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.064484 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.064498 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:41Z","lastTransitionTime":"2026-02-17T13:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.071483 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.087821 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.104687 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.117497 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a891e05e-5adf-4df4-9ad0-5d5326701e8e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08922642a7e0cf3749659b43cea0865de8509aa8c17e6137406830f1c897d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0175a0e173dbfea894445c85d21d5bcd0d42cb4cfc8817d41ad2c466b61b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0175a0e173dbfea894445c85d21d5bcd0d42cb4cfc8817d41ad2c466b61b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.131395 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.149582 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.164848 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.166633 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.166694 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.166712 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.166736 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.166755 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:41Z","lastTransitionTime":"2026-02-17T13:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.178107 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.190677 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.204971 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.219920 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd76798e54bfcbad6d3a5f07396fe8579adcdb3d5bab3c303a9d31ad242e830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:46:20Z\\\",\\\"message\\\":\\\"2026-02-17T13:45:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9e0f4ef6-02a5-40ec-920b-c377ffde18f5\\\\n2026-02-17T13:45:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9e0f4ef6-02a5-40ec-920b-c377ffde18f5 to /host/opt/cni/bin/\\\\n2026-02-17T13:45:35Z [verbose] multus-daemon started\\\\n2026-02-17T13:45:35Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T13:46:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.252979 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:46:26Z\\\",\\\"message\\\":\\\"e to create admin network policy controller, err: could not add Event Handler for 
anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:46:26.957178 6879 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vx9xx\\\\nI0217 13:46:26.957185 6879 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-vx9xx\\\\nI0217 13:46:26.957192 6879 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-vx9xx in node crc\\\\nI0217 13:46:26.957197 6879 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-vx9xx after 0 failed attempt(s)\\\\nI0217 13:46:26.957203 6879 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-vx9xx\\\\nI0217 13:46:26.957215 6879 obj_retry.go:303] Retry object setup: *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:46:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24
156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.269274 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.270005 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.270073 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.270106 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.270128 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:41Z","lastTransitionTime":"2026-02-17T13:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.271108 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4
36df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.301341 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.317732 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5991f03-6fb7-4425-b041-26d760e380ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae2cc390253214c6979cba64584e59e0342c1f750b16c569acfd885cb6b36c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9702bc05160617e0c8ac8fd3d9a81244b5be2bf955ef58f9408fc0b42bea6609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5529e4fe84d20c564ff55b47e32df67ca6aac40d2629a6b5bf96e03a64b79676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.372958 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.373002 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.373016 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.373057 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.373073 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:41Z","lastTransitionTime":"2026-02-17T13:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.374399 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.386661 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf73503b
b300f6f71385f8da17c04a4b5d84691d5d7ca3b363b2335e4905c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc181ab232df3bc23da46eabfb43d64f7c4e674152c27338d8ff5faf1c477bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.395774 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:41 crc 
kubenswrapper[4833]: I0217 13:46:41.476291 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.476359 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.476378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.476401 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.476418 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:41Z","lastTransitionTime":"2026-02-17T13:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.579652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.579711 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.579728 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.579753 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.579773 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:41Z","lastTransitionTime":"2026-02-17T13:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.683437 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.683558 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.683580 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.683609 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.683628 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:41Z","lastTransitionTime":"2026-02-17T13:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.786698 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.786776 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.786799 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.786825 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.786843 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:41Z","lastTransitionTime":"2026-02-17T13:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.889007 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.889063 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.889075 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.889094 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.889109 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:41Z","lastTransitionTime":"2026-02-17T13:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.992296 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.992368 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.992385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.992409 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:41 crc kubenswrapper[4833]: I0217 13:46:41.992428 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:41Z","lastTransitionTime":"2026-02-17T13:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.040841 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.040879 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.040858 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:42 crc kubenswrapper[4833]: E0217 13:46:42.041100 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:42 crc kubenswrapper[4833]: E0217 13:46:42.041327 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:42 crc kubenswrapper[4833]: E0217 13:46:42.041445 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.045095 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 18:53:01.107823262 +0000 UTC Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.095556 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.095671 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.095700 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.095735 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.095760 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:42Z","lastTransitionTime":"2026-02-17T13:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.198607 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.198670 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.198686 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.198707 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.198720 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:42Z","lastTransitionTime":"2026-02-17T13:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.301832 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.301899 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.301916 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.301940 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.301958 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:42Z","lastTransitionTime":"2026-02-17T13:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.404547 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.404665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.404689 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.404719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.404740 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:42Z","lastTransitionTime":"2026-02-17T13:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.507085 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.507144 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.507161 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.507186 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.507203 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:42Z","lastTransitionTime":"2026-02-17T13:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.609646 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.609713 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.609727 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.609750 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.609787 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:42Z","lastTransitionTime":"2026-02-17T13:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.713586 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.713665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.713685 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.713714 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.713735 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:42Z","lastTransitionTime":"2026-02-17T13:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.816991 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.817089 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.817108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.817133 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.817153 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:42Z","lastTransitionTime":"2026-02-17T13:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.920402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.920471 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.920524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.920553 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:42 crc kubenswrapper[4833]: I0217 13:46:42.920573 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:42Z","lastTransitionTime":"2026-02-17T13:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.023834 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.023893 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.023916 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.023943 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.023962 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:43Z","lastTransitionTime":"2026-02-17T13:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.040860 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:43 crc kubenswrapper[4833]: E0217 13:46:43.041108 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.045404 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 18:29:00.412156408 +0000 UTC Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.127705 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.127761 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.127778 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.127799 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.127818 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:43Z","lastTransitionTime":"2026-02-17T13:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.230772 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.230837 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.230854 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.230877 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.230894 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:43Z","lastTransitionTime":"2026-02-17T13:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.334364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.334417 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.334438 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.334464 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.334482 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:43Z","lastTransitionTime":"2026-02-17T13:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.437881 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.437937 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.437955 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.437978 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.437996 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:43Z","lastTransitionTime":"2026-02-17T13:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.541251 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.541342 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.541366 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.541398 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.541422 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:43Z","lastTransitionTime":"2026-02-17T13:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.644652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.644702 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.644713 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.644731 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.644747 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:43Z","lastTransitionTime":"2026-02-17T13:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.747931 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.748007 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.748072 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.748116 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.748138 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:43Z","lastTransitionTime":"2026-02-17T13:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.850565 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.850629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.850666 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.850704 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.850726 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:43Z","lastTransitionTime":"2026-02-17T13:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.953783 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.953847 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.953863 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.953889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:43 crc kubenswrapper[4833]: I0217 13:46:43.953908 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:43Z","lastTransitionTime":"2026-02-17T13:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.041133 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.041246 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.041246 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:44 crc kubenswrapper[4833]: E0217 13:46:44.041457 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:44 crc kubenswrapper[4833]: E0217 13:46:44.041591 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:44 crc kubenswrapper[4833]: E0217 13:46:44.041763 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.046477 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 05:06:17.620384122 +0000 UTC Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.057723 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.057804 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.057829 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.057859 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.057881 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:44Z","lastTransitionTime":"2026-02-17T13:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.161238 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.161341 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.161364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.161396 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.161415 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:44Z","lastTransitionTime":"2026-02-17T13:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.264197 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.264266 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.264284 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.264309 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.264326 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:44Z","lastTransitionTime":"2026-02-17T13:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.367679 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.367754 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.367777 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.367808 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.367833 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:44Z","lastTransitionTime":"2026-02-17T13:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.470321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.470390 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.470421 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.470445 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.470463 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:44Z","lastTransitionTime":"2026-02-17T13:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.573668 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.573723 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.573739 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.573761 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.573779 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:44Z","lastTransitionTime":"2026-02-17T13:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.676950 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.677473 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.677509 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.677533 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.677550 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:44Z","lastTransitionTime":"2026-02-17T13:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.780139 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.780207 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.780224 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.780251 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.780269 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:44Z","lastTransitionTime":"2026-02-17T13:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.884076 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.884140 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.884162 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.884192 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.884216 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:44Z","lastTransitionTime":"2026-02-17T13:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.987583 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.987629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.987640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.987657 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:44 crc kubenswrapper[4833]: I0217 13:46:44.987669 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:44Z","lastTransitionTime":"2026-02-17T13:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.041317 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:45 crc kubenswrapper[4833]: E0217 13:46:45.041516 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.047238 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:47:36.288298474 +0000 UTC Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.090527 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.090582 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.090599 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.090622 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.090638 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:45Z","lastTransitionTime":"2026-02-17T13:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.193938 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.193980 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.193996 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.194022 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.194067 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:45Z","lastTransitionTime":"2026-02-17T13:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.297117 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.297165 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.297181 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.297203 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.297220 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:45Z","lastTransitionTime":"2026-02-17T13:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.400441 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.400502 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.400524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.400551 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.400572 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:45Z","lastTransitionTime":"2026-02-17T13:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.503511 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.503566 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.503603 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.503634 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.503656 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:45Z","lastTransitionTime":"2026-02-17T13:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.606681 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.606734 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.606750 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.606772 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.606788 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:45Z","lastTransitionTime":"2026-02-17T13:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.710621 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.710661 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.710670 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.710683 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.710692 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:45Z","lastTransitionTime":"2026-02-17T13:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.813204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.813275 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.813314 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.813346 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.813366 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:45Z","lastTransitionTime":"2026-02-17T13:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.916903 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.916952 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.916964 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.916985 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:45 crc kubenswrapper[4833]: I0217 13:46:45.916998 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:45Z","lastTransitionTime":"2026-02-17T13:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.020640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.020709 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.020726 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.020748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.020766 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:46Z","lastTransitionTime":"2026-02-17T13:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.041184 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.041533 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.041182 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:46 crc kubenswrapper[4833]: E0217 13:46:46.041637 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:46 crc kubenswrapper[4833]: E0217 13:46:46.042031 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:46 crc kubenswrapper[4833]: E0217 13:46:46.042174 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.047770 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:28:24.67206012 +0000 UTC Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.123288 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.123356 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.123374 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.123402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.123422 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:46Z","lastTransitionTime":"2026-02-17T13:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.226894 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.226971 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.226997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.227027 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.227103 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:46Z","lastTransitionTime":"2026-02-17T13:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.329936 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.330026 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.330089 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.330119 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.330151 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:46Z","lastTransitionTime":"2026-02-17T13:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.433177 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.433236 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.433253 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.433276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.433292 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:46Z","lastTransitionTime":"2026-02-17T13:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.536607 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.536666 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.536676 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.536691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.536700 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:46Z","lastTransitionTime":"2026-02-17T13:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.639967 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.640011 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.640026 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.640067 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.640084 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:46Z","lastTransitionTime":"2026-02-17T13:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.743520 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.743573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.743585 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.743602 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.743614 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:46Z","lastTransitionTime":"2026-02-17T13:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.847076 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.847156 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.847176 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.847195 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.847239 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:46Z","lastTransitionTime":"2026-02-17T13:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.950678 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.950719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.950729 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.950743 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:46 crc kubenswrapper[4833]: I0217 13:46:46.950771 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:46Z","lastTransitionTime":"2026-02-17T13:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.041179 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:47 crc kubenswrapper[4833]: E0217 13:46:47.041364 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.048323 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 13:31:53.581887551 +0000 UTC Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.059797 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.059863 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.059878 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.059895 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.059906 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:47Z","lastTransitionTime":"2026-02-17T13:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.162276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.162319 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.162329 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.162342 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.162352 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:47Z","lastTransitionTime":"2026-02-17T13:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.266118 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.266204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.266223 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.266254 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.266276 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:47Z","lastTransitionTime":"2026-02-17T13:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.369793 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.369892 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.369915 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.369950 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.369975 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:47Z","lastTransitionTime":"2026-02-17T13:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.473729 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.473816 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.473845 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.473877 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.473901 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:47Z","lastTransitionTime":"2026-02-17T13:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.577344 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.577390 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.577457 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.577476 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.577486 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:47Z","lastTransitionTime":"2026-02-17T13:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.681648 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.681721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.681745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.681775 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.681796 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:47Z","lastTransitionTime":"2026-02-17T13:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.784628 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.784715 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.784738 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.784762 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.784780 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:47Z","lastTransitionTime":"2026-02-17T13:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.887199 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.887272 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.887306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.887337 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.887360 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:47Z","lastTransitionTime":"2026-02-17T13:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.990364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.990471 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.990491 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.990516 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:47 crc kubenswrapper[4833]: I0217 13:46:47.990533 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:47Z","lastTransitionTime":"2026-02-17T13:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.040902 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.041007 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.040902 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:48 crc kubenswrapper[4833]: E0217 13:46:48.041155 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:48 crc kubenswrapper[4833]: E0217 13:46:48.041329 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:48 crc kubenswrapper[4833]: E0217 13:46:48.041515 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.049210 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 07:36:47.810381443 +0000 UTC Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.094136 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.094194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.094211 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.094236 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.094254 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:48Z","lastTransitionTime":"2026-02-17T13:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.196846 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.196902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.196921 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.196943 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.196960 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:48Z","lastTransitionTime":"2026-02-17T13:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.257382 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.257482 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.257522 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.257558 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.257581 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:48Z","lastTransitionTime":"2026-02-17T13:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:48 crc kubenswrapper[4833]: E0217 13:46:48.279557 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.285021 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.285106 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.285123 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.285145 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.285162 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:48Z","lastTransitionTime":"2026-02-17T13:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:48 crc kubenswrapper[4833]: E0217 13:46:48.303861 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:48Z is after 2025-08-24T17:21:41Z"
Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.308760 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.308824 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.308842 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.308868 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.308935 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:48Z","lastTransitionTime":"2026-02-17T13:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:46:48 crc kubenswrapper[4833]: E0217 13:46:48.329020 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:48Z is after 2025-08-24T17:21:41Z"
Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.335590 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.335667 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.335692 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.335722 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.335745 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:48Z","lastTransitionTime":"2026-02-17T13:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:46:48 crc kubenswrapper[4833]: E0217 13:46:48.352850 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.358492 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.358593 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.358612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.359135 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.359389 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:48Z","lastTransitionTime":"2026-02-17T13:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:48 crc kubenswrapper[4833]: E0217 13:46:48.382966 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"858a93ea-15f0-4bac-8fa3-badb79f68871\\\",\\\"systemUUID\\\":\\\"648b67a2-27e7-447a-8ad2-7acc2e737df4\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:48 crc kubenswrapper[4833]: E0217 13:46:48.383211 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.384976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.385028 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.385066 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.385088 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.385103 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:48Z","lastTransitionTime":"2026-02-17T13:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.488513 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.488573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.488591 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.488616 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.488635 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:48Z","lastTransitionTime":"2026-02-17T13:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.591795 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.591861 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.591883 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.591912 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.591934 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:48Z","lastTransitionTime":"2026-02-17T13:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.599374 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs\") pod \"network-metrics-daemon-4b7xf\" (UID: \"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\") " pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:48 crc kubenswrapper[4833]: E0217 13:46:48.599529 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:46:48 crc kubenswrapper[4833]: E0217 13:46:48.599621 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs podName:892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c nodeName:}" failed. No retries permitted until 2026-02-17 13:47:52.599596347 +0000 UTC m=+162.234695820 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs") pod "network-metrics-daemon-4b7xf" (UID: "892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.694767 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.694840 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.694859 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.695289 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.695338 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:48Z","lastTransitionTime":"2026-02-17T13:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.798145 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.798191 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.798200 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.798214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.798223 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:48Z","lastTransitionTime":"2026-02-17T13:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.901336 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.901399 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.901416 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.901440 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:48 crc kubenswrapper[4833]: I0217 13:46:48.901458 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:48Z","lastTransitionTime":"2026-02-17T13:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.004224 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.004355 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.004383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.004411 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.004434 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:49Z","lastTransitionTime":"2026-02-17T13:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.041032 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:49 crc kubenswrapper[4833]: E0217 13:46:49.041282 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.049457 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 18:08:42.496844073 +0000 UTC Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.107837 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.107887 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.107905 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.107930 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.107948 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:49Z","lastTransitionTime":"2026-02-17T13:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.210902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.210964 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.210983 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.211008 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.211034 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:49Z","lastTransitionTime":"2026-02-17T13:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.314332 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.314392 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.314409 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.314476 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.314534 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:49Z","lastTransitionTime":"2026-02-17T13:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.417517 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.417591 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.417616 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.417671 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.417696 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:49Z","lastTransitionTime":"2026-02-17T13:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.520378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.520437 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.520454 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.520477 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.520491 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:49Z","lastTransitionTime":"2026-02-17T13:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.623357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.623439 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.623462 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.623489 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.623506 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:49Z","lastTransitionTime":"2026-02-17T13:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.726931 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.726976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.726988 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.727004 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.727016 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:49Z","lastTransitionTime":"2026-02-17T13:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.830140 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.830199 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.830219 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.830244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.830263 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:49Z","lastTransitionTime":"2026-02-17T13:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.933508 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.933581 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.933598 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.933623 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:49 crc kubenswrapper[4833]: I0217 13:46:49.933642 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:49Z","lastTransitionTime":"2026-02-17T13:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.036715 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.036758 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.036769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.036787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.036823 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:50Z","lastTransitionTime":"2026-02-17T13:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.041205 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.041223 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:50 crc kubenswrapper[4833]: E0217 13:46:50.041377 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:50 crc kubenswrapper[4833]: E0217 13:46:50.041442 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.041223 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:50 crc kubenswrapper[4833]: E0217 13:46:50.041528 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.050239 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 21:39:20.633333584 +0000 UTC Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.138999 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.139078 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.139093 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.139111 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.139122 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:50Z","lastTransitionTime":"2026-02-17T13:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.242179 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.242242 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.242251 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.242267 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.242277 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:50Z","lastTransitionTime":"2026-02-17T13:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.345338 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.345377 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.345388 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.345403 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.345414 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:50Z","lastTransitionTime":"2026-02-17T13:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.448293 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.448453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.448488 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.448518 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.448539 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:50Z","lastTransitionTime":"2026-02-17T13:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.552355 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.552416 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.552433 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.552457 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.552475 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:50Z","lastTransitionTime":"2026-02-17T13:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.656224 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.656290 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.656311 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.656336 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.656354 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:50Z","lastTransitionTime":"2026-02-17T13:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.759194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.759588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.759611 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.759639 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.759658 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:50Z","lastTransitionTime":"2026-02-17T13:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.861450 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.861480 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.861491 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.861505 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.861515 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:50Z","lastTransitionTime":"2026-02-17T13:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.963690 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.963804 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.963830 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.963862 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:50 crc kubenswrapper[4833]: I0217 13:46:50.963886 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:50Z","lastTransitionTime":"2026-02-17T13:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.041487 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:51 crc kubenswrapper[4833]: E0217 13:46:51.041642 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.051024 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 00:10:36.876749155 +0000 UTC Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.062240 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fd4be9-debc-405d-ac05-5a5160593231\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"star
tedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://436df1e343e5ca9347dd9
8b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"message\\\":\\\"W0217 13:45:14.176886 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:45:14.177326 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335914 cert, and key in /tmp/serving-cert-2529326345/serving-signer.crt, /tmp/serving-cert-2529326345/serving-signer.key\\\\nI0217 13:45:14.372413 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:45:14.378661 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:45:14.378888 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:45:14.382093 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529326345/tls.crt::/tmp/serving-cert-2529326345/tls.key\\\\\\\"\\\\nF0217 13:45:14.604297 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.066445 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.066482 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.066494 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.066510 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.066521 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:51Z","lastTransitionTime":"2026-02-17T13:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.094495 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e08f74a-75d7-4dce-9297-27140e91962a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9a2013b70df34194f895ea7c2e5f2650966fa2d06518733fa4411893e5b2b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbc961a623ca1f4237f238856a41acfb724aadd5fa3feda70410c352b6f750c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80bdd44d3b326c9bb61f21b221c63d4647e2cc6edc95fe568b90e76f4329cf1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9bc54d1057546b9c8d7d1527bb2ae93c1c713b4d6e5879477157af90fc0d89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e88eaef317b2beb0c55811634e6752439b7c7725072f55a42f5ab49c2ac2385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a323322182b99e68f75876fff893a3e2df4bc168499981a1c5169c99dd9bebb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb921bbd8d49f63cfc615d147e402b4fe61c14c761dbcfe4ac50aa1b8dadcb7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91cd3065ca119d1ca52d0451ba2134b7e283be4fef19e5e6e32dbaab252bdfee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T13:45:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.113989 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5991f03-6fb7-4425-b041-26d760e380ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae2cc390253214c6979cba64584e59e0342c1f750b16c569acfd885cb6b36c31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9702bc05160617e0c8ac8fd3d9a81244b5be2bf955ef58f9408fc0b42bea6609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5529e4fe84d20c564ff55b47e32df67ca6aac40d2629a6b5bf96e03a64b79676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e89d83530ec38b869e22ff9125ce8e680c2a274e471aedd447c9daee233acb1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.129320 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.148757 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wlt4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b8d3ca-f768-4129-9c1a-b4866dd852d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efd76798e54bfcbad6d3a5f07396fe8579adcdb3d5bab3c303a9d31ad242e830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:46:20Z\\\",\\\"message\\\":\\\"2026-02-17T13:45:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9e0f4ef6-02a5-40ec-920b-c377ffde18f5\\\\n2026-02-17T13:45:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9e0f4ef6-02a5-40ec-920b-c377ffde18f5 to /host/opt/cni/bin/\\\\n2026-02-17T13:45:35Z [verbose] multus-daemon started\\\\n2026-02-17T13:45:35Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T13:46:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9dxr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wlt4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.167999 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72c5918a-056f-446c-b138-a1be7140a5b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:46:26Z\\\",\\\"message\\\":\\\"e to create admin network policy controller, err: could not add Event Handler for 
anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:26Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:46:26.957178 6879 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vx9xx\\\\nI0217 13:46:26.957185 6879 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-vx9xx\\\\nI0217 13:46:26.957192 6879 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-vx9xx in node crc\\\\nI0217 13:46:26.957197 6879 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-vx9xx after 0 failed attempt(s)\\\\nI0217 13:46:26.957203 6879 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-vx9xx\\\\nI0217 13:46:26.957215 6879 obj_retry.go:303] Retry object setup: *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:46:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924d39b320ced85d24
156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7r9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.169790 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.169835 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.169846 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.169859 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.169867 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:51Z","lastTransitionTime":"2026-02-17T13:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.181871 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5gxjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03c9f579-21ad-4977-a5e8-db9272a08557\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7400b87694dd74f41819e23ce3f617021e028cdcbfdad65b2213249f35d176d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s6vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5gxjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.196235 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"962a90ec-217a-4df0-8d83-2e6663953088\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf73503bb300f6f71385f8da17c04a4b5d84691d5d7ca3b363b2335e4905c5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc181ab232df3bc23da46eabfb43d64f7c4e
674152c27338d8ff5faf1c477bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srrv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xwfsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.212602 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft275\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b7xf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:51 crc 
kubenswrapper[4833]: I0217 13:46:51.224943 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a891e05e-5adf-4df4-9ad0-5d5326701e8e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08922642a7e0cf3749659b43cea0865de8509aa8c17e6137406830f1c897d6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://c0175a0e173dbfea894445c85d21d5bcd0d42cb4cfc8817d41ad2c466b61b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0175a0e173dbfea894445c85d21d5bcd0d42cb4cfc8817d41ad2c466b61b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.244436 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c1afc7-6071-464b-aedc-11234d869afe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e36c4174a69c22c608db515a3fe558ec0292c53ae56a43a945c8bb19bf4cdf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e477bf1f25d554946b3657a5bd18f922b18483902fd4f9c05a452a299cb4d920\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2260fe18b8f9f829f7dde85c9bdfc32895c8bfe48e0986c8f8205708e078ff37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.264629 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a207ffb6a17f606fde32eec5ae4222e2fdf02d79f61e4f959be55814c14daf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.273610 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.273642 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.273653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.273668 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.273680 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:51Z","lastTransitionTime":"2026-02-17T13:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.278990 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b9b39aa9ac4f718b75bfeffba8e09bf2fefca68625037f565f3b237b6275db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.294193 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vx9xx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13dbe6c-a57b-4011-9987-193ccf4939f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43d6637ecd7299bd61ba9b47f093048b122c90fc98990eb22155e8467c1dfd75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2w7kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vx9xx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.308432 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4a1ca83-1919-4f9c-82de-c849cbd50e70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4ea9f4f4051db850a31f5e1081095664c8764e1033e81823e75cd0072d13a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e0
6a951b4d77897e687a7dcb78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nq5ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nmzvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.326986 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eb74ba2-a87a-415f-8978-8ef706346aa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08f2aa3f0028f17b13b73c760f1471c3f335f3ae02be305bddd63552ec7330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2569baf4273e04fbca0b38e0a65c945ba4b2767cba6f2b6647012328c041bab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec44770454d862d13715225217505d12ffefd3f5a8dae0912a7ba8a9fbccb0bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d28180d5fc08486a1492239ab1107c9b3856334091da9d2ad7ae0e6086fbd00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79641
e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79641e6aa81b861f7dd3dadd0cc23f677727909d85f017c761bcbbb19450b5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f4ea02b7d74057eddb0946756236afdb129b5ad326b153d66357ee42caca78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcd939d8a8deab6a94b6df237bd627850f5bb37bd29bd0988d16452dcbd87c9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:45:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:45:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24pg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wxvlq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.342743 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.356686 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e5a2da13ff704ed5e89100bd3a88a6349d446c2af48f88acb399efb43dc42d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac53462f7a94ff04dc94b5065169da7ec43985a5a7e7dba779f23614b61b78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.366881 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:45:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:46:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.375413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.375440 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.375448 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.375460 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.375476 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:51Z","lastTransitionTime":"2026-02-17T13:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.478607 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.478689 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.478706 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.478757 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.478777 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:51Z","lastTransitionTime":"2026-02-17T13:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.580916 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.580966 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.580982 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.581002 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.581014 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:51Z","lastTransitionTime":"2026-02-17T13:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.683849 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.683883 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.683894 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.683907 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.683916 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:51Z","lastTransitionTime":"2026-02-17T13:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.787425 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.787453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.787462 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.787475 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.787484 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:51Z","lastTransitionTime":"2026-02-17T13:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.890987 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.891091 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.891111 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.891140 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.891162 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:51Z","lastTransitionTime":"2026-02-17T13:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.993871 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.993914 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.993925 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.993941 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:51 crc kubenswrapper[4833]: I0217 13:46:51.993953 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:51Z","lastTransitionTime":"2026-02-17T13:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.041157 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.041397 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.041445 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:52 crc kubenswrapper[4833]: E0217 13:46:52.041604 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:52 crc kubenswrapper[4833]: E0217 13:46:52.041707 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:52 crc kubenswrapper[4833]: E0217 13:46:52.041817 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.051607 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 18:29:36.414254975 +0000 UTC Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.095991 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.096033 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.096071 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.096086 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.096098 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:52Z","lastTransitionTime":"2026-02-17T13:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.198606 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.198744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.198754 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.198776 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.198788 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:52Z","lastTransitionTime":"2026-02-17T13:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.302488 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.302553 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.302570 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.302600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.302617 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:52Z","lastTransitionTime":"2026-02-17T13:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.409273 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.409366 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.409453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.409524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.409551 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:52Z","lastTransitionTime":"2026-02-17T13:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.512932 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.513021 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.513101 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.513130 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.513149 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:52Z","lastTransitionTime":"2026-02-17T13:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.616749 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.616842 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.616861 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.616885 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.616931 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:52Z","lastTransitionTime":"2026-02-17T13:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.720542 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.720719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.720741 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.720764 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.720782 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:52Z","lastTransitionTime":"2026-02-17T13:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.823949 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.824001 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.824022 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.824070 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.824088 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:52Z","lastTransitionTime":"2026-02-17T13:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.927256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.927310 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.927329 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.927354 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:52 crc kubenswrapper[4833]: I0217 13:46:52.927371 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:52Z","lastTransitionTime":"2026-02-17T13:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.030823 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.030909 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.030932 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.030964 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.030986 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:53Z","lastTransitionTime":"2026-02-17T13:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.040732 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:53 crc kubenswrapper[4833]: E0217 13:46:53.041219 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.041528 4833 scope.go:117] "RemoveContainer" containerID="b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b" Feb 17 13:46:53 crc kubenswrapper[4833]: E0217 13:46:53.041741 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7r9gt_openshift-ovn-kubernetes(72c5918a-056f-446c-b138-a1be7140a5b0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.052128 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 22:07:31.932561513 +0000 UTC Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.134093 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.134430 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.134549 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.134649 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.134756 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:53Z","lastTransitionTime":"2026-02-17T13:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.237381 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.237430 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.237446 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.237466 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.237482 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:53Z","lastTransitionTime":"2026-02-17T13:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.340974 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.341389 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.341715 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.342135 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.342483 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:53Z","lastTransitionTime":"2026-02-17T13:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.446268 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.446576 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.446812 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.447004 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.447196 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:53Z","lastTransitionTime":"2026-02-17T13:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.550681 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.551251 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.551402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.551550 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.551709 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:53Z","lastTransitionTime":"2026-02-17T13:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.655327 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.655708 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.655851 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.655992 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.656154 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:53Z","lastTransitionTime":"2026-02-17T13:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.759480 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.759549 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.759570 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.759598 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.759619 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:53Z","lastTransitionTime":"2026-02-17T13:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.863300 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.863361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.863378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.863402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.863418 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:53Z","lastTransitionTime":"2026-02-17T13:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.966702 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.966774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.966798 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.966829 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:53 crc kubenswrapper[4833]: I0217 13:46:53.966852 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:53Z","lastTransitionTime":"2026-02-17T13:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.041563 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.041636 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.041725 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:54 crc kubenswrapper[4833]: E0217 13:46:54.041872 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:54 crc kubenswrapper[4833]: E0217 13:46:54.042106 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:54 crc kubenswrapper[4833]: E0217 13:46:54.042232 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.053066 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:33:09.951685991 +0000 UTC Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.070158 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.070214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.070233 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.070262 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.070284 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:54Z","lastTransitionTime":"2026-02-17T13:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.174371 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.174429 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.174451 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.174477 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.174499 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:54Z","lastTransitionTime":"2026-02-17T13:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.278384 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.278453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.278472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.278501 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.278522 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:54Z","lastTransitionTime":"2026-02-17T13:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.382464 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.382547 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.382576 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.382613 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.382639 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:54Z","lastTransitionTime":"2026-02-17T13:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.485660 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.485710 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.485719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.485743 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.485764 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:54Z","lastTransitionTime":"2026-02-17T13:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.588699 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.588750 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.588762 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.588779 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.588789 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:54Z","lastTransitionTime":"2026-02-17T13:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.692506 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.692587 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.692599 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.692622 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.692634 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:54Z","lastTransitionTime":"2026-02-17T13:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.796995 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.797081 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.797102 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.797146 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.797168 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:54Z","lastTransitionTime":"2026-02-17T13:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.901112 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.901152 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.901182 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.901200 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:54 crc kubenswrapper[4833]: I0217 13:46:54.901211 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:54Z","lastTransitionTime":"2026-02-17T13:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.005031 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.005146 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.005168 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.005195 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.005215 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:55Z","lastTransitionTime":"2026-02-17T13:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.044409 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 13:46:55 crc kubenswrapper[4833]: E0217 13:46:55.044649 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.054076 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 21:48:05.816953315 +0000 UTC
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.108073 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.108154 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.108174 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.108203 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.108221 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:55Z","lastTransitionTime":"2026-02-17T13:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.211871 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.211970 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.211991 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.212019 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.212073 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:55Z","lastTransitionTime":"2026-02-17T13:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.315505 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.315554 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.315572 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.315598 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.315618 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:55Z","lastTransitionTime":"2026-02-17T13:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.419125 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.419261 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.419280 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.419309 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.419328 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:55Z","lastTransitionTime":"2026-02-17T13:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.522380 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.522417 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.522428 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.522466 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.522477 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:55Z","lastTransitionTime":"2026-02-17T13:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.625956 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.626492 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.626686 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.626851 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.627005 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:55Z","lastTransitionTime":"2026-02-17T13:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.729909 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.730408 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.730625 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.730957 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.731277 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:55Z","lastTransitionTime":"2026-02-17T13:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.834327 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.834703 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.834856 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.835005 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.835203 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:55Z","lastTransitionTime":"2026-02-17T13:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.938318 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.938358 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.938371 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.938386 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:55 crc kubenswrapper[4833]: I0217 13:46:55.938397 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:55Z","lastTransitionTime":"2026-02-17T13:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.040719 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.040779 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf"
Feb 17 13:46:56 crc kubenswrapper[4833]: E0217 13:46:56.041364 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.041472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.041721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.041803 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.041905 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:56 crc kubenswrapper[4833]: E0217 13:46:56.041571 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.041995 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:56Z","lastTransitionTime":"2026-02-17T13:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.042441 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 13:46:56 crc kubenswrapper[4833]: E0217 13:46:56.042615 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.055104 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 13:57:14.17150294 +0000 UTC
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.145124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.145181 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.145193 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.145212 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.145225 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:56Z","lastTransitionTime":"2026-02-17T13:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.248097 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.248155 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.248171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.248196 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.248215 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:56Z","lastTransitionTime":"2026-02-17T13:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.351799 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.351869 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.351887 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.351914 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.351934 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:56Z","lastTransitionTime":"2026-02-17T13:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.455617 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.455662 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.455675 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.455691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.455704 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:56Z","lastTransitionTime":"2026-02-17T13:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.559179 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.559243 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.559261 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.559293 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.559313 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:56Z","lastTransitionTime":"2026-02-17T13:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.662181 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.662236 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.662249 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.662271 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.662287 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:56Z","lastTransitionTime":"2026-02-17T13:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.765497 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.765561 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.765574 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.765595 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.765609 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:56Z","lastTransitionTime":"2026-02-17T13:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.867972 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.868019 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.868029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.868073 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.868085 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:56Z","lastTransitionTime":"2026-02-17T13:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.971489 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.971542 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.971560 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.971583 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:56 crc kubenswrapper[4833]: I0217 13:46:56.971600 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:56Z","lastTransitionTime":"2026-02-17T13:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.040592 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 13:46:57 crc kubenswrapper[4833]: E0217 13:46:57.040772 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.056201 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 16:21:14.832956271 +0000 UTC
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.074744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.074858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.074879 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.074898 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.074912 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:57Z","lastTransitionTime":"2026-02-17T13:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.178537 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.178616 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.178635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.178663 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.178683 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:57Z","lastTransitionTime":"2026-02-17T13:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.281500 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.281572 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.281587 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.281609 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.281624 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:57Z","lastTransitionTime":"2026-02-17T13:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.385878 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.385963 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.385983 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.386009 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.386028 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:57Z","lastTransitionTime":"2026-02-17T13:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.490287 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.490375 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.490400 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.490433 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.490457 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:57Z","lastTransitionTime":"2026-02-17T13:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.594181 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.594254 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.594269 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.594293 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.594307 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:57Z","lastTransitionTime":"2026-02-17T13:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.697907 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.697996 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.698022 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.698085 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.698111 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:57Z","lastTransitionTime":"2026-02-17T13:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.801535 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.801612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.801631 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.801659 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.801681 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:57Z","lastTransitionTime":"2026-02-17T13:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.905409 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.905481 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.905493 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.905517 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:57 crc kubenswrapper[4833]: I0217 13:46:57.905532 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:57Z","lastTransitionTime":"2026-02-17T13:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.009346 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.009422 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.009446 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.009478 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.009502 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:58Z","lastTransitionTime":"2026-02-17T13:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.040988 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.041092 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.041118 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:46:58 crc kubenswrapper[4833]: E0217 13:46:58.041238 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:46:58 crc kubenswrapper[4833]: E0217 13:46:58.041477 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:46:58 crc kubenswrapper[4833]: E0217 13:46:58.041568 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.057244 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 05:18:50.997635764 +0000 UTC Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.113346 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.113413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.113435 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.113462 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.113486 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:58Z","lastTransitionTime":"2026-02-17T13:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.216897 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.216967 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.216978 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.217002 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.217015 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:58Z","lastTransitionTime":"2026-02-17T13:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.319374 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.319408 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.319418 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.319435 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.319448 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:58Z","lastTransitionTime":"2026-02-17T13:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.422383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.422431 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.422447 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.422472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.422488 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:58Z","lastTransitionTime":"2026-02-17T13:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.525383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.525421 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.525431 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.525448 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.525459 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:58Z","lastTransitionTime":"2026-02-17T13:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.620360 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.620412 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.620426 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.620450 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.620464 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:46:58Z","lastTransitionTime":"2026-02-17T13:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.695525 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x"] Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.696304 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.699090 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.699580 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.699667 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.699699 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.738733 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-5gxjd" podStartSLOduration=88.738692289 podStartE2EDuration="1m28.738692289s" podCreationTimestamp="2026-02-17 13:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:46:58.719137947 +0000 UTC m=+108.354237420" watchObservedRunningTime="2026-02-17 13:46:58.738692289 +0000 UTC m=+108.373791742" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.739588 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xwfsh" podStartSLOduration=87.739551002 podStartE2EDuration="1m27.739551002s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:46:58.739275385 +0000 UTC m=+108.374374818" watchObservedRunningTime="2026-02-17 13:46:58.739551002 +0000 UTC 
m=+108.374650445" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.798887 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.798863627 podStartE2EDuration="1m29.798863627s" podCreationTimestamp="2026-02-17 13:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:46:58.798801875 +0000 UTC m=+108.433901318" watchObservedRunningTime="2026-02-17 13:46:58.798863627 +0000 UTC m=+108.433963060" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.799187 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.799184185 podStartE2EDuration="21.799184185s" podCreationTimestamp="2026-02-17 13:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:46:58.777237258 +0000 UTC m=+108.412336721" watchObservedRunningTime="2026-02-17 13:46:58.799184185 +0000 UTC m=+108.434283608" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.824981 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abb982a6-c95d-4ed0-b253-dc5e8b995ea8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-76v9x\" (UID: \"abb982a6-c95d-4ed0-b253-dc5e8b995ea8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.825107 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/abb982a6-c95d-4ed0-b253-dc5e8b995ea8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-76v9x\" (UID: 
\"abb982a6-c95d-4ed0-b253-dc5e8b995ea8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.825151 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/abb982a6-c95d-4ed0-b253-dc5e8b995ea8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-76v9x\" (UID: \"abb982a6-c95d-4ed0-b253-dc5e8b995ea8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.825208 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abb982a6-c95d-4ed0-b253-dc5e8b995ea8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-76v9x\" (UID: \"abb982a6-c95d-4ed0-b253-dc5e8b995ea8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.825298 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/abb982a6-c95d-4ed0-b253-dc5e8b995ea8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-76v9x\" (UID: \"abb982a6-c95d-4ed0-b253-dc5e8b995ea8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.861902 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-vx9xx" podStartSLOduration=89.861869092 podStartE2EDuration="1m29.861869092s" podCreationTimestamp="2026-02-17 13:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:46:58.847712226 +0000 UTC m=+108.482811669" watchObservedRunningTime="2026-02-17 
13:46:58.861869092 +0000 UTC m=+108.496968545" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.862099 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podStartSLOduration=88.862088758 podStartE2EDuration="1m28.862088758s" podCreationTimestamp="2026-02-17 13:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:46:58.861821831 +0000 UTC m=+108.496921264" watchObservedRunningTime="2026-02-17 13:46:58.862088758 +0000 UTC m=+108.497188211" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.902849 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-wxvlq" podStartSLOduration=88.902824027 podStartE2EDuration="1m28.902824027s" podCreationTimestamp="2026-02-17 13:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:46:58.885617108 +0000 UTC m=+108.520716561" watchObservedRunningTime="2026-02-17 13:46:58.902824027 +0000 UTC m=+108.537923450" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.926278 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/abb982a6-c95d-4ed0-b253-dc5e8b995ea8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-76v9x\" (UID: \"abb982a6-c95d-4ed0-b253-dc5e8b995ea8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.926378 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/abb982a6-c95d-4ed0-b253-dc5e8b995ea8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-76v9x\" (UID: 
\"abb982a6-c95d-4ed0-b253-dc5e8b995ea8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.926445 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abb982a6-c95d-4ed0-b253-dc5e8b995ea8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-76v9x\" (UID: \"abb982a6-c95d-4ed0-b253-dc5e8b995ea8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.926469 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/abb982a6-c95d-4ed0-b253-dc5e8b995ea8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-76v9x\" (UID: \"abb982a6-c95d-4ed0-b253-dc5e8b995ea8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.926528 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/abb982a6-c95d-4ed0-b253-dc5e8b995ea8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-76v9x\" (UID: \"abb982a6-c95d-4ed0-b253-dc5e8b995ea8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.926469 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/abb982a6-c95d-4ed0-b253-dc5e8b995ea8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-76v9x\" (UID: \"abb982a6-c95d-4ed0-b253-dc5e8b995ea8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.926571 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/abb982a6-c95d-4ed0-b253-dc5e8b995ea8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-76v9x\" (UID: \"abb982a6-c95d-4ed0-b253-dc5e8b995ea8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.927623 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/abb982a6-c95d-4ed0-b253-dc5e8b995ea8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-76v9x\" (UID: \"abb982a6-c95d-4ed0-b253-dc5e8b995ea8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.936296 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abb982a6-c95d-4ed0-b253-dc5e8b995ea8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-76v9x\" (UID: \"abb982a6-c95d-4ed0-b253-dc5e8b995ea8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.951721 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abb982a6-c95d-4ed0-b253-dc5e8b995ea8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-76v9x\" (UID: \"abb982a6-c95d-4ed0-b253-dc5e8b995ea8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" Feb 17 13:46:58 crc kubenswrapper[4833]: I0217 13:46:58.977460 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.977437728 podStartE2EDuration="1m28.977437728s" podCreationTimestamp="2026-02-17 13:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:46:58.975611688 +0000 UTC 
m=+108.610711131" watchObservedRunningTime="2026-02-17 13:46:58.977437728 +0000 UTC m=+108.612537161" Feb 17 13:46:59 crc kubenswrapper[4833]: I0217 13:46:59.013313 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" Feb 17 13:46:59 crc kubenswrapper[4833]: I0217 13:46:59.023481 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=89.02346427 podStartE2EDuration="1m29.02346427s" podCreationTimestamp="2026-02-17 13:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:46:59.001969995 +0000 UTC m=+108.637069448" watchObservedRunningTime="2026-02-17 13:46:59.02346427 +0000 UTC m=+108.658563713" Feb 17 13:46:59 crc kubenswrapper[4833]: I0217 13:46:59.024146 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=54.024143199 podStartE2EDuration="54.024143199s" podCreationTimestamp="2026-02-17 13:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:46:59.02308524 +0000 UTC m=+108.658184673" watchObservedRunningTime="2026-02-17 13:46:59.024143199 +0000 UTC m=+108.659242632" Feb 17 13:46:59 crc kubenswrapper[4833]: I0217 13:46:59.041785 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:46:59 crc kubenswrapper[4833]: E0217 13:46:59.041919 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:46:59 crc kubenswrapper[4833]: I0217 13:46:59.057800 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 22:50:55.301234429 +0000 UTC Feb 17 13:46:59 crc kubenswrapper[4833]: I0217 13:46:59.057866 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 17 13:46:59 crc kubenswrapper[4833]: I0217 13:46:59.070226 4833 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 13:46:59 crc kubenswrapper[4833]: I0217 13:46:59.091835 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-wlt4c" podStartSLOduration=89.09180882 podStartE2EDuration="1m29.09180882s" podCreationTimestamp="2026-02-17 13:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:46:59.067370755 +0000 UTC m=+108.702470188" watchObservedRunningTime="2026-02-17 13:46:59.09180882 +0000 UTC m=+108.726908263" Feb 17 13:46:59 crc kubenswrapper[4833]: I0217 13:46:59.609296 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" event={"ID":"abb982a6-c95d-4ed0-b253-dc5e8b995ea8","Type":"ContainerStarted","Data":"1cbd0de1e183ed79e304bdbfed75f4cf2d45e16fea31982506618134709061bf"} Feb 17 13:46:59 crc kubenswrapper[4833]: I0217 13:46:59.609365 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" event={"ID":"abb982a6-c95d-4ed0-b253-dc5e8b995ea8","Type":"ContainerStarted","Data":"efb0196f64229e963e20bee7e79a9e471c61caab7275aa0db23bf56118d6284d"} Feb 17 
13:46:59 crc kubenswrapper[4833]: I0217 13:46:59.630094 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-76v9x" podStartSLOduration=89.6300323 podStartE2EDuration="1m29.6300323s" podCreationTimestamp="2026-02-17 13:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:46:59.628732104 +0000 UTC m=+109.263831567" watchObservedRunningTime="2026-02-17 13:46:59.6300323 +0000 UTC m=+109.265131743" Feb 17 13:47:00 crc kubenswrapper[4833]: I0217 13:47:00.040544 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:00 crc kubenswrapper[4833]: I0217 13:47:00.040627 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:47:00 crc kubenswrapper[4833]: I0217 13:47:00.040627 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:47:00 crc kubenswrapper[4833]: E0217 13:47:00.040714 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:47:00 crc kubenswrapper[4833]: E0217 13:47:00.040803 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:47:00 crc kubenswrapper[4833]: E0217 13:47:00.040902 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:47:01 crc kubenswrapper[4833]: I0217 13:47:01.040667 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:47:01 crc kubenswrapper[4833]: E0217 13:47:01.042872 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:47:02 crc kubenswrapper[4833]: I0217 13:47:02.041318 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:02 crc kubenswrapper[4833]: I0217 13:47:02.041313 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:47:02 crc kubenswrapper[4833]: I0217 13:47:02.041345 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:47:02 crc kubenswrapper[4833]: E0217 13:47:02.041554 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:47:02 crc kubenswrapper[4833]: E0217 13:47:02.041660 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:47:02 crc kubenswrapper[4833]: E0217 13:47:02.041717 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:47:03 crc kubenswrapper[4833]: I0217 13:47:03.040850 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:47:03 crc kubenswrapper[4833]: E0217 13:47:03.041288 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:47:04 crc kubenswrapper[4833]: I0217 13:47:04.040702 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:47:04 crc kubenswrapper[4833]: I0217 13:47:04.040740 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:04 crc kubenswrapper[4833]: I0217 13:47:04.040881 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:47:04 crc kubenswrapper[4833]: E0217 13:47:04.041003 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:47:04 crc kubenswrapper[4833]: E0217 13:47:04.041134 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:47:04 crc kubenswrapper[4833]: E0217 13:47:04.041231 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:47:05 crc kubenswrapper[4833]: I0217 13:47:05.040968 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:47:05 crc kubenswrapper[4833]: E0217 13:47:05.041204 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:47:06 crc kubenswrapper[4833]: I0217 13:47:06.040685 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:47:06 crc kubenswrapper[4833]: I0217 13:47:06.040743 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:47:06 crc kubenswrapper[4833]: I0217 13:47:06.040706 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:06 crc kubenswrapper[4833]: E0217 13:47:06.040914 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:47:06 crc kubenswrapper[4833]: E0217 13:47:06.041090 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:47:06 crc kubenswrapper[4833]: E0217 13:47:06.041219 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:47:07 crc kubenswrapper[4833]: I0217 13:47:07.040743 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:47:07 crc kubenswrapper[4833]: E0217 13:47:07.040871 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:47:07 crc kubenswrapper[4833]: I0217 13:47:07.042333 4833 scope.go:117] "RemoveContainer" containerID="b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b" Feb 17 13:47:07 crc kubenswrapper[4833]: I0217 13:47:07.638245 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovnkube-controller/3.log" Feb 17 13:47:07 crc kubenswrapper[4833]: I0217 13:47:07.640982 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerStarted","Data":"a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9"} Feb 17 13:47:07 crc kubenswrapper[4833]: I0217 13:47:07.641333 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:47:07 crc kubenswrapper[4833]: I0217 13:47:07.642628 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wlt4c_a3b8d3ca-f768-4129-9c1a-b4866dd852d4/kube-multus/1.log" Feb 17 13:47:07 crc kubenswrapper[4833]: I0217 
13:47:07.643288 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wlt4c_a3b8d3ca-f768-4129-9c1a-b4866dd852d4/kube-multus/0.log" Feb 17 13:47:07 crc kubenswrapper[4833]: I0217 13:47:07.643366 4833 generic.go:334] "Generic (PLEG): container finished" podID="a3b8d3ca-f768-4129-9c1a-b4866dd852d4" containerID="efd76798e54bfcbad6d3a5f07396fe8579adcdb3d5bab3c303a9d31ad242e830" exitCode=1 Feb 17 13:47:07 crc kubenswrapper[4833]: I0217 13:47:07.643411 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wlt4c" event={"ID":"a3b8d3ca-f768-4129-9c1a-b4866dd852d4","Type":"ContainerDied","Data":"efd76798e54bfcbad6d3a5f07396fe8579adcdb3d5bab3c303a9d31ad242e830"} Feb 17 13:47:07 crc kubenswrapper[4833]: I0217 13:47:07.643457 4833 scope.go:117] "RemoveContainer" containerID="26d2bf39d36b6e8998e442f36f84198a363d7f22f56a05957a4a242459e6ff8b" Feb 17 13:47:07 crc kubenswrapper[4833]: I0217 13:47:07.643893 4833 scope.go:117] "RemoveContainer" containerID="efd76798e54bfcbad6d3a5f07396fe8579adcdb3d5bab3c303a9d31ad242e830" Feb 17 13:47:07 crc kubenswrapper[4833]: E0217 13:47:07.644084 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-wlt4c_openshift-multus(a3b8d3ca-f768-4129-9c1a-b4866dd852d4)\"" pod="openshift-multus/multus-wlt4c" podUID="a3b8d3ca-f768-4129-9c1a-b4866dd852d4" Feb 17 13:47:07 crc kubenswrapper[4833]: I0217 13:47:07.694253 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" podStartSLOduration=96.694229563 podStartE2EDuration="1m36.694229563s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:07.675693648 +0000 UTC m=+117.310793141" 
watchObservedRunningTime="2026-02-17 13:47:07.694229563 +0000 UTC m=+117.329329016" Feb 17 13:47:07 crc kubenswrapper[4833]: I0217 13:47:07.958898 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4b7xf"] Feb 17 13:47:07 crc kubenswrapper[4833]: I0217 13:47:07.959358 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:47:07 crc kubenswrapper[4833]: E0217 13:47:07.959473 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:47:08 crc kubenswrapper[4833]: I0217 13:47:08.040930 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:47:08 crc kubenswrapper[4833]: E0217 13:47:08.041022 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:47:08 crc kubenswrapper[4833]: I0217 13:47:08.041326 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:08 crc kubenswrapper[4833]: E0217 13:47:08.041397 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:47:08 crc kubenswrapper[4833]: I0217 13:47:08.649124 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wlt4c_a3b8d3ca-f768-4129-9c1a-b4866dd852d4/kube-multus/1.log" Feb 17 13:47:09 crc kubenswrapper[4833]: I0217 13:47:09.040778 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:47:09 crc kubenswrapper[4833]: E0217 13:47:09.040942 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:47:10 crc kubenswrapper[4833]: I0217 13:47:10.040829 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:10 crc kubenswrapper[4833]: I0217 13:47:10.040934 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:47:10 crc kubenswrapper[4833]: E0217 13:47:10.041003 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:47:10 crc kubenswrapper[4833]: I0217 13:47:10.040855 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:47:10 crc kubenswrapper[4833]: E0217 13:47:10.041149 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:47:10 crc kubenswrapper[4833]: E0217 13:47:10.041321 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:47:11 crc kubenswrapper[4833]: E0217 13:47:11.029081 4833 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 17 13:47:11 crc kubenswrapper[4833]: I0217 13:47:11.041249 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:47:11 crc kubenswrapper[4833]: E0217 13:47:11.042376 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:47:11 crc kubenswrapper[4833]: E0217 13:47:11.128179 4833 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 13:47:12 crc kubenswrapper[4833]: I0217 13:47:12.040526 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:47:12 crc kubenswrapper[4833]: I0217 13:47:12.040643 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:12 crc kubenswrapper[4833]: E0217 13:47:12.041155 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:47:12 crc kubenswrapper[4833]: I0217 13:47:12.040692 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:47:12 crc kubenswrapper[4833]: E0217 13:47:12.040993 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:47:12 crc kubenswrapper[4833]: E0217 13:47:12.041388 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:47:13 crc kubenswrapper[4833]: I0217 13:47:13.040713 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:47:13 crc kubenswrapper[4833]: E0217 13:47:13.040901 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:47:14 crc kubenswrapper[4833]: I0217 13:47:14.040805 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:47:14 crc kubenswrapper[4833]: I0217 13:47:14.040860 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:47:14 crc kubenswrapper[4833]: I0217 13:47:14.040885 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:14 crc kubenswrapper[4833]: E0217 13:47:14.040972 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:47:14 crc kubenswrapper[4833]: E0217 13:47:14.041092 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:47:14 crc kubenswrapper[4833]: E0217 13:47:14.041180 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:47:15 crc kubenswrapper[4833]: I0217 13:47:15.041616 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:47:15 crc kubenswrapper[4833]: E0217 13:47:15.041861 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:47:16 crc kubenswrapper[4833]: I0217 13:47:16.040683 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:47:16 crc kubenswrapper[4833]: I0217 13:47:16.040790 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:47:16 crc kubenswrapper[4833]: E0217 13:47:16.040957 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:47:16 crc kubenswrapper[4833]: E0217 13:47:16.041274 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:47:16 crc kubenswrapper[4833]: I0217 13:47:16.040711 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:16 crc kubenswrapper[4833]: E0217 13:47:16.041407 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:47:16 crc kubenswrapper[4833]: E0217 13:47:16.130194 4833 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 13:47:17 crc kubenswrapper[4833]: I0217 13:47:17.041468 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:47:17 crc kubenswrapper[4833]: E0217 13:47:17.041692 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:47:18 crc kubenswrapper[4833]: I0217 13:47:18.041119 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:47:18 crc kubenswrapper[4833]: I0217 13:47:18.041347 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:47:18 crc kubenswrapper[4833]: E0217 13:47:18.041375 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:47:18 crc kubenswrapper[4833]: I0217 13:47:18.041424 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:18 crc kubenswrapper[4833]: E0217 13:47:18.041559 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:47:18 crc kubenswrapper[4833]: E0217 13:47:18.041915 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:47:19 crc kubenswrapper[4833]: I0217 13:47:19.041339 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:47:19 crc kubenswrapper[4833]: E0217 13:47:19.041569 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:47:20 crc kubenswrapper[4833]: I0217 13:47:20.041552 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:47:20 crc kubenswrapper[4833]: I0217 13:47:20.041672 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:47:20 crc kubenswrapper[4833]: I0217 13:47:20.041590 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:20 crc kubenswrapper[4833]: E0217 13:47:20.041755 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:47:20 crc kubenswrapper[4833]: E0217 13:47:20.041938 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:47:20 crc kubenswrapper[4833]: E0217 13:47:20.042102 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:47:21 crc kubenswrapper[4833]: I0217 13:47:21.041236 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:47:21 crc kubenswrapper[4833]: E0217 13:47:21.044083 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:47:21 crc kubenswrapper[4833]: E0217 13:47:21.131106 4833 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 13:47:22 crc kubenswrapper[4833]: I0217 13:47:22.041232 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:47:22 crc kubenswrapper[4833]: I0217 13:47:22.041299 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:22 crc kubenswrapper[4833]: E0217 13:47:22.041542 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:47:22 crc kubenswrapper[4833]: I0217 13:47:22.041629 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:47:22 crc kubenswrapper[4833]: E0217 13:47:22.041631 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:47:22 crc kubenswrapper[4833]: E0217 13:47:22.042121 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:47:22 crc kubenswrapper[4833]: I0217 13:47:22.042228 4833 scope.go:117] "RemoveContainer" containerID="efd76798e54bfcbad6d3a5f07396fe8579adcdb3d5bab3c303a9d31ad242e830" Feb 17 13:47:22 crc kubenswrapper[4833]: I0217 13:47:22.706611 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wlt4c_a3b8d3ca-f768-4129-9c1a-b4866dd852d4/kube-multus/1.log" Feb 17 13:47:22 crc kubenswrapper[4833]: I0217 13:47:22.706702 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wlt4c" event={"ID":"a3b8d3ca-f768-4129-9c1a-b4866dd852d4","Type":"ContainerStarted","Data":"11b83835c273f377e2c85db9ff37901aa2d246ce6673d32ff9925526757a98b3"} Feb 17 13:47:23 crc kubenswrapper[4833]: I0217 13:47:23.041508 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:47:23 crc kubenswrapper[4833]: E0217 13:47:23.041887 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:47:24 crc kubenswrapper[4833]: I0217 13:47:24.041264 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:47:24 crc kubenswrapper[4833]: I0217 13:47:24.041338 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:24 crc kubenswrapper[4833]: I0217 13:47:24.041391 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:47:24 crc kubenswrapper[4833]: E0217 13:47:24.042110 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:47:24 crc kubenswrapper[4833]: E0217 13:47:24.042171 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:47:24 crc kubenswrapper[4833]: E0217 13:47:24.041811 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:47:25 crc kubenswrapper[4833]: I0217 13:47:25.041684 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:47:25 crc kubenswrapper[4833]: E0217 13:47:25.041910 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:47:26 crc kubenswrapper[4833]: I0217 13:47:26.041167 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:47:26 crc kubenswrapper[4833]: I0217 13:47:26.041212 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:26 crc kubenswrapper[4833]: E0217 13:47:26.041374 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:47:26 crc kubenswrapper[4833]: I0217 13:47:26.041437 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:47:26 crc kubenswrapper[4833]: E0217 13:47:26.041639 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:47:26 crc kubenswrapper[4833]: E0217 13:47:26.041705 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b7xf" podUID="892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c" Feb 17 13:47:26 crc kubenswrapper[4833]: I0217 13:47:26.120996 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:47:27 crc kubenswrapper[4833]: I0217 13:47:27.041212 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:47:27 crc kubenswrapper[4833]: I0217 13:47:27.045939 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 13:47:27 crc kubenswrapper[4833]: I0217 13:47:27.046221 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 13:47:28 crc kubenswrapper[4833]: I0217 13:47:28.041430 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:28 crc kubenswrapper[4833]: I0217 13:47:28.041479 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf" Feb 17 13:47:28 crc kubenswrapper[4833]: I0217 13:47:28.041445 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:47:28 crc kubenswrapper[4833]: I0217 13:47:28.044310 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 13:47:28 crc kubenswrapper[4833]: I0217 13:47:28.044310 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 13:47:28 crc kubenswrapper[4833]: I0217 13:47:28.046358 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 13:47:28 crc kubenswrapper[4833]: I0217 13:47:28.046589 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.652512 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.712996 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-stnj2"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.713909 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-stnj2" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.714347 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-btl28"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.714874 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-btl28" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.715617 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.716509 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.716631 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.717189 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.717763 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4wnpt"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.718462 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4wnpt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.719759 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6pb"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.720123 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-hjdrd"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.720569 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hjdrd" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.720987 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6pb" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.722552 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.722746 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.722891 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.723130 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.723438 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.723617 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-b8pbt"] Feb 17 
13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.723732 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.723964 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.724283 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.725265 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.725782 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.726283 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.726446 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.726577 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.726713 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.726850 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.727011 4833 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.727177 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.733070 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jnj5g"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.733445 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wc6g2"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.733866 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wc6g2" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.734311 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.736299 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.737718 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nsksg"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.738191 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nsksg" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.738712 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.738783 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.738875 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.738998 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.738896 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.739069 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.738887 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.739364 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.739426 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.739502 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 
13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.739676 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.740134 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.743323 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.743644 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.748419 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-xwz2m"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.752085 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.775448 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sgpfh"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.775533 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.776831 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-xwz2m" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.777920 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4dgk\" (UniqueName: \"kubernetes.io/projected/12eba345-07f7-4472-8a45-d0bd87317b0a-kube-api-access-f4dgk\") pod \"cluster-samples-operator-665b6dd947-nsksg\" (UID: \"12eba345-07f7-4472-8a45-d0bd87317b0a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nsksg" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.778071 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/12eba345-07f7-4472-8a45-d0bd87317b0a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nsksg\" (UID: \"12eba345-07f7-4472-8a45-d0bd87317b0a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nsksg" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.779406 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.779494 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.779508 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.779618 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.779640 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 13:47:29 crc 
kubenswrapper[4833]: I0217 13:47:29.779654 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.779674 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.779439 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.779784 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.779842 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.779870 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.779876 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.779962 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.780020 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.780064 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.780288 4833 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.781707 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.781845 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.781957 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.782056 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.782139 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.782240 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.782319 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.782379 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.782497 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.782518 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.782714 4833 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.782819 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.782895 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.782981 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.782720 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.782819 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.783148 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.783443 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.783753 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.783919 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sgpfh" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.784263 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.784392 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.784874 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.789806 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.793659 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.794314 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.794573 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.795384 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.796086 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.796317 4833 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.797345 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.797529 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.797762 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.797911 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.797993 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.798004 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.798212 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.798402 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.798562 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.801124 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.801496 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h2rpb"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.802003 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-knm6z"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.802154 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.802427 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.802996 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.804304 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.804735 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.804932 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.804976 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.808166 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.814791 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-27vkt"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.816331 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.819300 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.821115 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.821394 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.821638 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.822314 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.830586 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.831002 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.831201 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.831230 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pc9vj"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.831660 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pc9vj"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.831931 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-27vkt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.832381 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.832549 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.835126 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.837596 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.837726 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xj6dt"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.838277 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvzml"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.838555 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-slqbd"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.838888 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vtrsq"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.838934 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xj6dt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.838901 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-slqbd"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.839255 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvzml"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.839721 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vtrsq"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.842301 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.842630 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-dmmfm"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.845758 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.846134 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.846333 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dmmfm"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.846607 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5d2h9"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.847083 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-fps2n"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.847269 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5d2h9"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.847401 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fps2n"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.848159 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.848390 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.851591 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-lsrj4"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.851972 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.852327 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.852429 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.852550 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6nkfm"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.852565 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lsrj4"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.853071 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.853796 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.854094 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-stnj2"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.855611 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2mlww"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.856333 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2mlww"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.856372 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.858026 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.858245 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7dwhv"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.858780 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dwhv"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.859064 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.860278 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.862006 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.862500 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-znglf"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.863403 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-znglf"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.864687 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wc6g2"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.865738 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2ddlx"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.866182 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.867138 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jnj5g"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.874422 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4wnpt"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.879219 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-service-ca\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.879253 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-oauth-serving-cert\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.879308 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78qdg\" (UniqueName: \"kubernetes.io/projected/22bea415-75b9-4f36-a531-62617ed244c8-kube-api-access-78qdg\") pod \"kube-storage-version-migrator-operator-b67b599dd-vtrsq\" (UID: \"22bea415-75b9-4f36-a531-62617ed244c8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vtrsq"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.879349 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-etcd-client\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.879392 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87a6deb5-a486-4485-9e62-e9eb946878ca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jnj5g\" (UID: \"87a6deb5-a486-4485-9e62-e9eb946878ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.879416 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77fd0717-6d6c-46cf-a19a-29dce58a7176-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2n6pb\" (UID: \"77fd0717-6d6c-46cf-a19a-29dce58a7176\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6pb"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.879435 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.879464 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-audit-policies\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.879486 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d4005d2-d765-4a8e-9b85-8c49d8238995-config\") pod \"controller-manager-879f6c89f-btl28\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " pod="openshift-controller-manager/controller-manager-879f6c89f-btl28"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.879584 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75dw7\" (UniqueName: \"kubernetes.io/projected/77fd0717-6d6c-46cf-a19a-29dce58a7176-kube-api-access-75dw7\") pod \"openshift-apiserver-operator-796bbdcf4f-2n6pb\" (UID: \"77fd0717-6d6c-46cf-a19a-29dce58a7176\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6pb"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.879627 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c1e5bc60-f7d5-436d-9298-3099adb6bc0a-images\") pod \"machine-api-operator-5694c8668f-stnj2\" (UID: \"c1e5bc60-f7d5-436d-9298-3099adb6bc0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-stnj2"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.879652 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqz5v\" (UniqueName: \"kubernetes.io/projected/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-kube-api-access-pqz5v\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.879677 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-console-config\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.879707 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/af0f5f19-3c55-45a1-9a8f-66e55fd46683-machine-approver-tls\") pod \"machine-approver-56656f9798-hjdrd\" (UID: \"af0f5f19-3c55-45a1-9a8f-66e55fd46683\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hjdrd"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.879730 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-audit\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.879749 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.879991 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-audit-dir\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880017 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-config\") pod \"route-controller-manager-6576b87f9c-g2gvf\" (UID: \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880079 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a6deb5-a486-4485-9e62-e9eb946878ca-config\") pod \"authentication-operator-69f744f599-jnj5g\" (UID: \"87a6deb5-a486-4485-9e62-e9eb946878ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880102 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22bea415-75b9-4f36-a531-62617ed244c8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vtrsq\" (UID: \"22bea415-75b9-4f36-a531-62617ed244c8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vtrsq"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880130 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js4sk\" (UniqueName: \"kubernetes.io/projected/c1e5bc60-f7d5-436d-9298-3099adb6bc0a-kube-api-access-js4sk\") pod \"machine-api-operator-5694c8668f-stnj2\" (UID: \"c1e5bc60-f7d5-436d-9298-3099adb6bc0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-stnj2"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880150 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b617c42f-c749-41e5-a305-692a4c631656-console-oauth-config\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880326 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzqdl\" (UniqueName: \"kubernetes.io/projected/b617c42f-c749-41e5-a305-692a4c631656-kube-api-access-dzqdl\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880403 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880484 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nngdf\" (UniqueName: \"kubernetes.io/projected/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-kube-api-access-nngdf\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880519 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-serving-cert\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880557 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4dgk\" (UniqueName: \"kubernetes.io/projected/12eba345-07f7-4472-8a45-d0bd87317b0a-kube-api-access-f4dgk\") pod \"cluster-samples-operator-665b6dd947-nsksg\" (UID: \"12eba345-07f7-4472-8a45-d0bd87317b0a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nsksg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880577 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d4005d2-d765-4a8e-9b85-8c49d8238995-client-ca\") pod \"controller-manager-879f6c89f-btl28\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " pod="openshift-controller-manager/controller-manager-879f6c89f-btl28"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880600 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-client-ca\") pod \"route-controller-manager-6576b87f9c-g2gvf\" (UID: \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880647 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-config\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880671 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af0f5f19-3c55-45a1-9a8f-66e55fd46683-auth-proxy-config\") pod \"machine-approver-56656f9798-hjdrd\" (UID: \"af0f5f19-3c55-45a1-9a8f-66e55fd46683\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hjdrd"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880824 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1e5bc60-f7d5-436d-9298-3099adb6bc0a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-stnj2\" (UID: \"c1e5bc60-f7d5-436d-9298-3099adb6bc0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-stnj2"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880855 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqlg2\" (UniqueName: \"kubernetes.io/projected/af0f5f19-3c55-45a1-9a8f-66e55fd46683-kube-api-access-kqlg2\") pod \"machine-approver-56656f9798-hjdrd\" (UID: \"af0f5f19-3c55-45a1-9a8f-66e55fd46683\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hjdrd"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880873 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1e5bc60-f7d5-436d-9298-3099adb6bc0a-config\") pod \"machine-api-operator-5694c8668f-stnj2\" (UID: \"c1e5bc60-f7d5-436d-9298-3099adb6bc0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-stnj2"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880892 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/116808cb-b59f-4f18-8fa1-a383fba544d9-config\") pod \"console-operator-58897d9998-4wnpt\" (UID: \"116808cb-b59f-4f18-8fa1-a383fba544d9\") " pod="openshift-console-operator/console-operator-58897d9998-4wnpt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880915 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a6deb5-a486-4485-9e62-e9eb946878ca-serving-cert\") pod \"authentication-operator-69f744f599-jnj5g\" (UID: \"87a6deb5-a486-4485-9e62-e9eb946878ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880947 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880965 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d4005d2-d765-4a8e-9b85-8c49d8238995-serving-cert\") pod \"controller-manager-879f6c89f-btl28\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " pod="openshift-controller-manager/controller-manager-879f6c89f-btl28"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.880985 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-serving-cert\") pod \"route-controller-manager-6576b87f9c-g2gvf\" (UID: \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.881004 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-trusted-ca-bundle\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.881113 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp96f\" (UniqueName: \"kubernetes.io/projected/3d4005d2-d765-4a8e-9b85-8c49d8238995-kube-api-access-xp96f\") pod \"controller-manager-879f6c89f-btl28\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " pod="openshift-controller-manager/controller-manager-879f6c89f-btl28"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.881140 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-etcd-serving-ca\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.881211 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-image-import-ca\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.881313 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-etcd-client\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.881372 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f77r\" (UniqueName: \"kubernetes.io/projected/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-kube-api-access-5f77r\") pod \"route-controller-manager-6576b87f9c-g2gvf\" (UID: \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.881637 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/12eba345-07f7-4472-8a45-d0bd87317b0a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nsksg\" (UID: \"12eba345-07f7-4472-8a45-d0bd87317b0a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nsksg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.881851 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm6vx\" (UniqueName: \"kubernetes.io/projected/87a6deb5-a486-4485-9e62-e9eb946878ca-kube-api-access-rm6vx\") pod \"authentication-operator-69f744f599-jnj5g\" (UID: \"87a6deb5-a486-4485-9e62-e9eb946878ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.881896 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/116808cb-b59f-4f18-8fa1-a383fba544d9-trusted-ca\") pod \"console-operator-58897d9998-4wnpt\" (UID: \"116808cb-b59f-4f18-8fa1-a383fba544d9\") " pod="openshift-console-operator/console-operator-58897d9998-4wnpt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.881957 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a0d00bf-a97f-4ccb-ba05-aba2a220014a-metrics-tls\") pod \"dns-operator-744455d44c-wc6g2\" (UID: \"6a0d00bf-a97f-4ccb-ba05-aba2a220014a\") " pod="openshift-dns-operator/dns-operator-744455d44c-wc6g2"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.881979 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-audit-dir\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.881995 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22bea415-75b9-4f36-a531-62617ed244c8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vtrsq\" (UID: \"22bea415-75b9-4f36-a531-62617ed244c8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vtrsq"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.882017 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77fd0717-6d6c-46cf-a19a-29dce58a7176-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2n6pb\" (UID: \"77fd0717-6d6c-46cf-a19a-29dce58a7176\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6pb"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.882097 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-b8pbt"]
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.882155 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-serving-cert\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.882180 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d4005d2-d765-4a8e-9b85-8c49d8238995-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-btl28\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " pod="openshift-controller-manager/controller-manager-879f6c89f-btl28"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.882200 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b617c42f-c749-41e5-a305-692a4c631656-console-serving-cert\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.882235 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af0f5f19-3c55-45a1-9a8f-66e55fd46683-config\") pod \"machine-approver-56656f9798-hjdrd\" (UID: \"af0f5f19-3c55-45a1-9a8f-66e55fd46683\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hjdrd"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.882293 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsw2t\" (UniqueName: \"kubernetes.io/projected/6a0d00bf-a97f-4ccb-ba05-aba2a220014a-kube-api-access-nsw2t\") pod \"dns-operator-744455d44c-wc6g2\" (UID: \"6a0d00bf-a97f-4ccb-ba05-aba2a220014a\") " pod="openshift-dns-operator/dns-operator-744455d44c-wc6g2"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.882347 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-node-pullsecrets\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217
13:47:29.882495 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-encryption-config\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.882530 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/116808cb-b59f-4f18-8fa1-a383fba544d9-serving-cert\") pod \"console-operator-58897d9998-4wnpt\" (UID: \"116808cb-b59f-4f18-8fa1-a383fba544d9\") " pod="openshift-console-operator/console-operator-58897d9998-4wnpt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.882551 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwv6f\" (UniqueName: \"kubernetes.io/projected/116808cb-b59f-4f18-8fa1-a383fba544d9-kube-api-access-gwv6f\") pod \"console-operator-58897d9998-4wnpt\" (UID: \"116808cb-b59f-4f18-8fa1-a383fba544d9\") " pod="openshift-console-operator/console-operator-58897d9998-4wnpt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.882601 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87a6deb5-a486-4485-9e62-e9eb946878ca-service-ca-bundle\") pod \"authentication-operator-69f744f599-jnj5g\" (UID: \"87a6deb5-a486-4485-9e62-e9eb946878ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.882709 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-encryption-config\") pod 
\"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.886093 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.887001 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6pb"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.887201 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.888187 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-btl28"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.889632 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-k497l"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.890521 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-k497l" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.891700 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-xwz2m"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.891871 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/12eba345-07f7-4472-8a45-d0bd87317b0a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nsksg\" (UID: \"12eba345-07f7-4472-8a45-d0bd87317b0a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nsksg" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.892796 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-knm6z"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.893984 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-slqbd"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.895412 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.896149 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-27vkt"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.897116 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xj6dt"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.898630 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.899850 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-h2rpb"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.900825 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.901871 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.902943 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vtrsq"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.903900 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.904844 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sgpfh"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.906020 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pc9vj"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.907547 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nsksg"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.908518 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fps2n"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.909698 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.910899 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvzml"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.911827 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.913085 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-znglf"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.914108 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.915242 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.915378 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5d2h9"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.916209 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2mlww"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.917432 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-57krf"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.919228 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-s9pl9"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.919635 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-s9pl9" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.920178 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7dwhv"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.921570 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6nkfm"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.923001 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.923312 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2ddlx"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.925547 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.926901 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-k497l"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.928109 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-57krf"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.928584 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-s9pl9"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.936258 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.936708 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-9wgts"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.940400 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-9wgts" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.941232 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9wgts"] Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.955648 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.975423 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985020 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spkwh\" (UniqueName: \"kubernetes.io/projected/ec012f89-9c82-4c49-9a7e-892979946444-kube-api-access-spkwh\") pod \"package-server-manager-789f6589d5-27vkt\" (UID: \"ec012f89-9c82-4c49-9a7e-892979946444\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-27vkt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985083 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985106 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:29 crc 
kubenswrapper[4833]: I0217 13:47:29.985126 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf443d79-e768-4da2-b385-e6b8072cb88e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xj6dt\" (UID: \"bf443d79-e768-4da2-b385-e6b8072cb88e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xj6dt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985149 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-serving-cert\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985457 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d4005d2-d765-4a8e-9b85-8c49d8238995-client-ca\") pod \"controller-manager-879f6c89f-btl28\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " pod="openshift-controller-manager/controller-manager-879f6c89f-btl28" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985494 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985522 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-config\") pod 
\"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985546 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzcds\" (UniqueName: \"kubernetes.io/projected/f5b88dcb-84d4-4311-8c01-860d17b444eb-kube-api-access-rzcds\") pod \"multus-admission-controller-857f4d67dd-2mlww\" (UID: \"f5b88dcb-84d4-4311-8c01-860d17b444eb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2mlww" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985657 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af0f5f19-3c55-45a1-9a8f-66e55fd46683-auth-proxy-config\") pod \"machine-approver-56656f9798-hjdrd\" (UID: \"af0f5f19-3c55-45a1-9a8f-66e55fd46683\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hjdrd" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985687 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985705 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a6deb5-a486-4485-9e62-e9eb946878ca-serving-cert\") pod \"authentication-operator-69f744f599-jnj5g\" (UID: \"87a6deb5-a486-4485-9e62-e9eb946878ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 
13:47:29.985723 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqlg2\" (UniqueName: \"kubernetes.io/projected/af0f5f19-3c55-45a1-9a8f-66e55fd46683-kube-api-access-kqlg2\") pod \"machine-approver-56656f9798-hjdrd\" (UID: \"af0f5f19-3c55-45a1-9a8f-66e55fd46683\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hjdrd" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985740 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1e5bc60-f7d5-436d-9298-3099adb6bc0a-config\") pod \"machine-api-operator-5694c8668f-stnj2\" (UID: \"c1e5bc60-f7d5-436d-9298-3099adb6bc0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-stnj2" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985757 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e90aa002-ad59-43a3-88db-f2e03408f40d-config\") pod \"kube-apiserver-operator-766d6c64bb-pc9vj\" (UID: \"e90aa002-ad59-43a3-88db-f2e03408f40d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pc9vj" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985792 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a39a9177-9838-434f-a2e0-c8359ff146fe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6nkfm\" (UID: \"a39a9177-9838-434f-a2e0-c8359ff146fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985809 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9bd21e79-5975-4f5f-976e-94a05e2df000-apiservice-cert\") pod 
\"packageserver-d55dfcdfc-cvhc2\" (UID: \"9bd21e79-5975-4f5f-976e-94a05e2df000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985834 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a0991bf-b28c-471a-8c28-24b461784fdd-config\") pod \"etcd-operator-b45778765-knm6z\" (UID: \"7a0991bf-b28c-471a-8c28-24b461784fdd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985858 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp96f\" (UniqueName: \"kubernetes.io/projected/3d4005d2-d765-4a8e-9b85-8c49d8238995-kube-api-access-xp96f\") pod \"controller-manager-879f6c89f-btl28\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " pod="openshift-controller-manager/controller-manager-879f6c89f-btl28" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985876 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-etcd-serving-ca\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985922 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-etcd-client\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985950 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/7998df5b-fb36-4561-870d-80e55c87facd-node-bootstrap-token\") pod \"machine-config-server-lsrj4\" (UID: \"7998df5b-fb36-4561-870d-80e55c87facd\") " pod="openshift-machine-config-operator/machine-config-server-lsrj4" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.985980 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f77r\" (UniqueName: \"kubernetes.io/projected/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-kube-api-access-5f77r\") pod \"route-controller-manager-6576b87f9c-g2gvf\" (UID: \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986022 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/116808cb-b59f-4f18-8fa1-a383fba544d9-trusted-ca\") pod \"console-operator-58897d9998-4wnpt\" (UID: \"116808cb-b59f-4f18-8fa1-a383fba544d9\") " pod="openshift-console-operator/console-operator-58897d9998-4wnpt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986066 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a0991bf-b28c-471a-8c28-24b461784fdd-etcd-client\") pod \"etcd-operator-b45778765-knm6z\" (UID: \"7a0991bf-b28c-471a-8c28-24b461784fdd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986085 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-serving-cert\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986107 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986127 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-444mc\" (UniqueName: \"kubernetes.io/projected/9d9814af-c66a-49fd-a3ca-814e8b0caf48-kube-api-access-444mc\") pod \"router-default-5444994796-dmmfm\" (UID: \"9d9814af-c66a-49fd-a3ca-814e8b0caf48\") " pod="openshift-ingress/router-default-5444994796-dmmfm" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986144 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986160 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpgp6\" (UniqueName: \"kubernetes.io/projected/c251713f-cec1-4ae0-a70b-553ab3b74a5b-kube-api-access-gpgp6\") pod \"openshift-config-operator-7777fb866f-l9kx4\" (UID: \"c251713f-cec1-4ae0-a70b-553ab3b74a5b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986177 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njmxm\" (UniqueName: 
\"kubernetes.io/projected/3e99f9e5-e0f4-4444-8303-f69571809455-kube-api-access-njmxm\") pod \"control-plane-machine-set-operator-78cbb6b69f-slqbd\" (UID: \"3e99f9e5-e0f4-4444-8303-f69571809455\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-slqbd"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986193 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af0f5f19-3c55-45a1-9a8f-66e55fd46683-config\") pod \"machine-approver-56656f9798-hjdrd\" (UID: \"af0f5f19-3c55-45a1-9a8f-66e55fd46683\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hjdrd"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986211 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-encryption-config\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986227 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b617c42f-c749-41e5-a305-692a4c631656-console-serving-cert\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986242 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87a6deb5-a486-4485-9e62-e9eb946878ca-service-ca-bundle\") pod \"authentication-operator-69f744f599-jnj5g\" (UID: \"87a6deb5-a486-4485-9e62-e9eb946878ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986259 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m84s2\" (UniqueName: \"kubernetes.io/projected/04af8f3e-80ec-462f-bbda-a1d7e1ebd37f-kube-api-access-m84s2\") pod \"cluster-image-registry-operator-dc59b4c8b-r7lt2\" (UID: \"04af8f3e-80ec-462f-bbda-a1d7e1ebd37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986275 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/116808cb-b59f-4f18-8fa1-a383fba544d9-serving-cert\") pod \"console-operator-58897d9998-4wnpt\" (UID: \"116808cb-b59f-4f18-8fa1-a383fba544d9\") " pod="openshift-console-operator/console-operator-58897d9998-4wnpt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986292 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwv6f\" (UniqueName: \"kubernetes.io/projected/116808cb-b59f-4f18-8fa1-a383fba544d9-kube-api-access-gwv6f\") pod \"console-operator-58897d9998-4wnpt\" (UID: \"116808cb-b59f-4f18-8fa1-a383fba544d9\") " pod="openshift-console-operator/console-operator-58897d9998-4wnpt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986306 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-encryption-config\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986322 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e6036e20-bd82-411b-a2f5-0806db078ac0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xdsld\" (UID: \"e6036e20-bd82-411b-a2f5-0806db078ac0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986340 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c46nj\" (UniqueName: \"kubernetes.io/projected/e6036e20-bd82-411b-a2f5-0806db078ac0-kube-api-access-c46nj\") pod \"olm-operator-6b444d44fb-xdsld\" (UID: \"e6036e20-bd82-411b-a2f5-0806db078ac0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986357 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78qdg\" (UniqueName: \"kubernetes.io/projected/22bea415-75b9-4f36-a531-62617ed244c8-kube-api-access-78qdg\") pod \"kube-storage-version-migrator-operator-b67b599dd-vtrsq\" (UID: \"22bea415-75b9-4f36-a531-62617ed244c8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vtrsq"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986373 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-service-ca\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986388 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-etcd-client\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986404 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d4005d2-d765-4a8e-9b85-8c49d8238995-config\") pod \"controller-manager-879f6c89f-btl28\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " pod="openshift-controller-manager/controller-manager-879f6c89f-btl28"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986420 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87a6deb5-a486-4485-9e62-e9eb946878ca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jnj5g\" (UID: \"87a6deb5-a486-4485-9e62-e9eb946878ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986436 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/234db162-a495-45d1-8af9-7e2deaa2763c-audit-dir\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986453 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77fd0717-6d6c-46cf-a19a-29dce58a7176-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2n6pb\" (UID: \"77fd0717-6d6c-46cf-a19a-29dce58a7176\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6pb"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986468 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-audit-policies\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986483 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf443d79-e768-4da2-b385-e6b8072cb88e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xj6dt\" (UID: \"bf443d79-e768-4da2-b385-e6b8072cb88e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xj6dt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986483 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-config\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986499 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c251713f-cec1-4ae0-a70b-553ab3b74a5b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l9kx4\" (UID: \"c251713f-cec1-4ae0-a70b-553ab3b74a5b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986516 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqz5v\" (UniqueName: \"kubernetes.io/projected/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-kube-api-access-pqz5v\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986532 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm4z9\" (UniqueName: \"kubernetes.io/projected/7998df5b-fb36-4561-870d-80e55c87facd-kube-api-access-wm4z9\") pod \"machine-config-server-lsrj4\" (UID: \"7998df5b-fb36-4561-870d-80e55c87facd\") " pod="openshift-machine-config-operator/machine-config-server-lsrj4"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986768 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af0f5f19-3c55-45a1-9a8f-66e55fd46683-auth-proxy-config\") pod \"machine-approver-56656f9798-hjdrd\" (UID: \"af0f5f19-3c55-45a1-9a8f-66e55fd46683\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hjdrd"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986780 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w76l\" (UniqueName: \"kubernetes.io/projected/00b43a88-457b-4c5a-ab44-2af8e47b2c2d-kube-api-access-6w76l\") pod \"ingress-operator-5b745b69d9-chdnk\" (UID: \"00b43a88-457b-4c5a-ab44-2af8e47b2c2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986800 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3e99f9e5-e0f4-4444-8303-f69571809455-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-slqbd\" (UID: \"3e99f9e5-e0f4-4444-8303-f69571809455\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-slqbd"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986819 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-audit\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986841 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-audit-dir\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986861 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdptx\" (UniqueName: \"kubernetes.io/projected/10816227-9540-49c4-bd68-82e7810b9e06-kube-api-access-tdptx\") pod \"migrator-59844c95c7-znglf\" (UID: \"10816227-9540-49c4-bd68-82e7810b9e06\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-znglf"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986891 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a6deb5-a486-4485-9e62-e9eb946878ca-config\") pod \"authentication-operator-69f744f599-jnj5g\" (UID: \"87a6deb5-a486-4485-9e62-e9eb946878ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986908 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22bea415-75b9-4f36-a531-62617ed244c8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vtrsq\" (UID: \"22bea415-75b9-4f36-a531-62617ed244c8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vtrsq"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986929 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b617c42f-c749-41e5-a305-692a4c631656-console-oauth-config\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986950 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzqdl\" (UniqueName: \"kubernetes.io/projected/b617c42f-c749-41e5-a305-692a4c631656-kube-api-access-dzqdl\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986973 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.986994 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.987022 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec012f89-9c82-4c49-9a7e-892979946444-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-27vkt\" (UID: \"ec012f89-9c82-4c49-9a7e-892979946444\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-27vkt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.987060 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nngdf\" (UniqueName: \"kubernetes.io/projected/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-kube-api-access-nngdf\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.987077 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/502603f9-5374-4fa7-8398-1f9d931e370f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gvzml\" (UID: \"502603f9-5374-4fa7-8398-1f9d931e370f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvzml"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.987183 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1e5bc60-f7d5-436d-9298-3099adb6bc0a-config\") pod \"machine-api-operator-5694c8668f-stnj2\" (UID: \"c1e5bc60-f7d5-436d-9298-3099adb6bc0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-stnj2"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.987330 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-etcd-serving-ca\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.987568 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d4005d2-d765-4a8e-9b85-8c49d8238995-client-ca\") pod \"controller-manager-879f6c89f-btl28\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " pod="openshift-controller-manager/controller-manager-879f6c89f-btl28"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.987667 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-serving-cert\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.987932 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a6deb5-a486-4485-9e62-e9eb946878ca-serving-cert\") pod \"authentication-operator-69f744f599-jnj5g\" (UID: \"87a6deb5-a486-4485-9e62-e9eb946878ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.987989 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af0f5f19-3c55-45a1-9a8f-66e55fd46683-config\") pod \"machine-approver-56656f9798-hjdrd\" (UID: \"af0f5f19-3c55-45a1-9a8f-66e55fd46683\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hjdrd"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.988182 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/116808cb-b59f-4f18-8fa1-a383fba544d9-trusted-ca\") pod \"console-operator-58897d9998-4wnpt\" (UID: \"116808cb-b59f-4f18-8fa1-a383fba544d9\") " pod="openshift-console-operator/console-operator-58897d9998-4wnpt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.988818 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87a6deb5-a486-4485-9e62-e9eb946878ca-service-ca-bundle\") pod \"authentication-operator-69f744f599-jnj5g\" (UID: \"87a6deb5-a486-4485-9e62-e9eb946878ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.988940 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77fd0717-6d6c-46cf-a19a-29dce58a7176-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2n6pb\" (UID: \"77fd0717-6d6c-46cf-a19a-29dce58a7176\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6pb"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.988943 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-etcd-client\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.989546 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-service-ca\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.989651 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-audit-dir\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.989877 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/116808cb-b59f-4f18-8fa1-a383fba544d9-serving-cert\") pod \"console-operator-58897d9998-4wnpt\" (UID: \"116808cb-b59f-4f18-8fa1-a383fba544d9\") " pod="openshift-console-operator/console-operator-58897d9998-4wnpt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.989932 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-encryption-config\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990064 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-audit-policies\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990163 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-audit\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990457 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b617c42f-c749-41e5-a305-692a4c631656-console-serving-cert\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990521 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990575 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-client-ca\") pod \"route-controller-manager-6576b87f9c-g2gvf\" (UID: \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990603 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990672 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1e5bc60-f7d5-436d-9298-3099adb6bc0a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-stnj2\" (UID: \"c1e5bc60-f7d5-436d-9298-3099adb6bc0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-stnj2"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990709 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf443d79-e768-4da2-b385-e6b8072cb88e-config\") pod \"kube-controller-manager-operator-78b949d7b-xj6dt\" (UID: \"bf443d79-e768-4da2-b385-e6b8072cb88e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xj6dt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990727 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/116808cb-b59f-4f18-8fa1-a383fba544d9-config\") pod \"console-operator-58897d9998-4wnpt\" (UID: \"116808cb-b59f-4f18-8fa1-a383fba544d9\") " pod="openshift-console-operator/console-operator-58897d9998-4wnpt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990745 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d9814af-c66a-49fd-a3ca-814e8b0caf48-metrics-certs\") pod \"router-default-5444994796-dmmfm\" (UID: \"9d9814af-c66a-49fd-a3ca-814e8b0caf48\") " pod="openshift-ingress/router-default-5444994796-dmmfm"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990785 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a0991bf-b28c-471a-8c28-24b461784fdd-serving-cert\") pod \"etcd-operator-b45778765-knm6z\" (UID: \"7a0991bf-b28c-471a-8c28-24b461784fdd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990805 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9d9814af-c66a-49fd-a3ca-814e8b0caf48-default-certificate\") pod \"router-default-5444994796-dmmfm\" (UID: \"9d9814af-c66a-49fd-a3ca-814e8b0caf48\") " pod="openshift-ingress/router-default-5444994796-dmmfm"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990822 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04af8f3e-80ec-462f-bbda-a1d7e1ebd37f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r7lt2\" (UID: \"04af8f3e-80ec-462f-bbda-a1d7e1ebd37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990839 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990879 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d4005d2-d765-4a8e-9b85-8c49d8238995-serving-cert\") pod \"controller-manager-879f6c89f-btl28\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " pod="openshift-controller-manager/controller-manager-879f6c89f-btl28"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990896 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9d9814af-c66a-49fd-a3ca-814e8b0caf48-stats-auth\") pod \"router-default-5444994796-dmmfm\" (UID: \"9d9814af-c66a-49fd-a3ca-814e8b0caf48\") " pod="openshift-ingress/router-default-5444994796-dmmfm"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990912 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e90aa002-ad59-43a3-88db-f2e03408f40d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pc9vj\" (UID: \"e90aa002-ad59-43a3-88db-f2e03408f40d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pc9vj"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990928 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04af8f3e-80ec-462f-bbda-a1d7e1ebd37f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r7lt2\" (UID: \"04af8f3e-80ec-462f-bbda-a1d7e1ebd37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990946 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkdwb\" (UniqueName: \"kubernetes.io/projected/ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815-kube-api-access-bkdwb\") pod \"openshift-controller-manager-operator-756b6f6bc6-sgpfh\" (UID: \"ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sgpfh"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990970 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-serving-cert\") pod \"route-controller-manager-6576b87f9c-g2gvf\" (UID: \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.990988 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-audit-policies\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991003 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/00b43a88-457b-4c5a-ab44-2af8e47b2c2d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-chdnk\" (UID: \"00b43a88-457b-4c5a-ab44-2af8e47b2c2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991073 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-trusted-ca-bundle\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991090 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-image-import-ca\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991107 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qcm5\" (UniqueName: \"kubernetes.io/projected/c91cc697-a95c-4c07-8750-878937f50446-kube-api-access-8qcm5\") pod \"downloads-7954f5f757-fps2n\" (UID: \"c91cc697-a95c-4c07-8750-878937f50446\") " pod="openshift-console/downloads-7954f5f757-fps2n"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991126 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-sgpfh\" (UID: \"ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sgpfh"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991141 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7998df5b-fb36-4561-870d-80e55c87facd-certs\") pod \"machine-config-server-lsrj4\" (UID: \"7998df5b-fb36-4561-870d-80e55c87facd\") " pod="openshift-machine-config-operator/machine-config-server-lsrj4"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991157 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a6deb5-a486-4485-9e62-e9eb946878ca-config\") pod \"authentication-operator-69f744f599-jnj5g\" (UID: \"87a6deb5-a486-4485-9e62-e9eb946878ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991160 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm6vx\" (UniqueName: \"kubernetes.io/projected/87a6deb5-a486-4485-9e62-e9eb946878ca-kube-api-access-rm6vx\") pod \"authentication-operator-69f744f599-jnj5g\" (UID: \"87a6deb5-a486-4485-9e62-e9eb946878ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991253 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-client-ca\") pod \"route-controller-manager-6576b87f9c-g2gvf\" (UID: \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991204 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9bd21e79-5975-4f5f-976e-94a05e2df000-webhook-cert\") pod \"packageserver-d55dfcdfc-cvhc2\" (UID: \"9bd21e79-5975-4f5f-976e-94a05e2df000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991347 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c251713f-cec1-4ae0-a70b-553ab3b74a5b-serving-cert\") pod \"openshift-config-operator-7777fb866f-l9kx4\" (UID: \"c251713f-cec1-4ae0-a70b-553ab3b74a5b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991412 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a0d00bf-a97f-4ccb-ba05-aba2a220014a-metrics-tls\") pod \"dns-operator-744455d44c-wc6g2\" (UID: \"6a0d00bf-a97f-4ccb-ba05-aba2a220014a\") " pod="openshift-dns-operator/dns-operator-744455d44c-wc6g2"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991471 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-audit-dir\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991491 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhlb9\" (UniqueName: \"kubernetes.io/projected/9bd21e79-5975-4f5f-976e-94a05e2df000-kube-api-access-bhlb9\") pod \"packageserver-d55dfcdfc-cvhc2\" (UID: \"9bd21e79-5975-4f5f-976e-94a05e2df000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991524 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77fd0717-6d6c-46cf-a19a-29dce58a7176-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2n6pb\" (UID: \"77fd0717-6d6c-46cf-a19a-29dce58a7176\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6pb"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991541 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22bea415-75b9-4f36-a531-62617ed244c8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vtrsq\" (UID: \"22bea415-75b9-4f36-a531-62617ed244c8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vtrsq"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991559 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/502603f9-5374-4fa7-8398-1f9d931e370f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gvzml\" (UID: \"502603f9-5374-4fa7-8398-1f9d931e370f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvzml"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991594 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87a6deb5-a486-4485-9e62-e9eb946878ca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jnj5g\" (UID: \"87a6deb5-a486-4485-9e62-e9eb946878ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991598 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d4005d2-d765-4a8e-9b85-8c49d8238995-config\") pod \"controller-manager-879f6c89f-btl28\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " pod="openshift-controller-manager/controller-manager-879f6c89f-btl28"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.991693 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.992154 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-audit-dir\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.992300 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/116808cb-b59f-4f18-8fa1-a383fba544d9-config\") pod \"console-operator-58897d9998-4wnpt\" (UID: \"116808cb-b59f-4f18-8fa1-a383fba544d9\") " pod="openshift-console-operator/console-operator-58897d9998-4wnpt"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.992388 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/502603f9-5374-4fa7-8398-1f9d931e370f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gvzml\" (UID: \"502603f9-5374-4fa7-8398-1f9d931e370f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvzml"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.992408 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb"
Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.992429 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d4005d2-d765-4a8e-9b85-8c49d8238995-proxy-ca-bundles\") pod
\"controller-manager-879f6c89f-btl28\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " pod="openshift-controller-manager/controller-manager-879f6c89f-btl28" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.992448 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-node-pullsecrets\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.992472 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a0991bf-b28c-471a-8c28-24b461784fdd-etcd-service-ca\") pod \"etcd-operator-b45778765-knm6z\" (UID: \"7a0991bf-b28c-471a-8c28-24b461784fdd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.992493 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsw2t\" (UniqueName: \"kubernetes.io/projected/6a0d00bf-a97f-4ccb-ba05-aba2a220014a-kube-api-access-nsw2t\") pod \"dns-operator-744455d44c-wc6g2\" (UID: \"6a0d00bf-a97f-4ccb-ba05-aba2a220014a\") " pod="openshift-dns-operator/dns-operator-744455d44c-wc6g2" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.992557 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e90aa002-ad59-43a3-88db-f2e03408f40d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pc9vj\" (UID: \"e90aa002-ad59-43a3-88db-f2e03408f40d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pc9vj" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.992575 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00b43a88-457b-4c5a-ab44-2af8e47b2c2d-trusted-ca\") pod \"ingress-operator-5b745b69d9-chdnk\" (UID: \"00b43a88-457b-4c5a-ab44-2af8e47b2c2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.992606 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-oauth-serving-cert\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.992623 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a39a9177-9838-434f-a2e0-c8359ff146fe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6nkfm\" (UID: \"a39a9177-9838-434f-a2e0-c8359ff146fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.992641 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvprr\" (UniqueName: \"kubernetes.io/projected/234db162-a495-45d1-8af9-7e2deaa2763c-kube-api-access-pvprr\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.993153 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-trusted-ca-bundle\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " 
pod="openshift-console/console-f9d7485db-xwz2m" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.993426 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d4005d2-d765-4a8e-9b85-8c49d8238995-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-btl28\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " pod="openshift-controller-manager/controller-manager-879f6c89f-btl28" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.993468 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-node-pullsecrets\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.993772 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-image-import-ca\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.993838 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.993864 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75dw7\" (UniqueName: \"kubernetes.io/projected/77fd0717-6d6c-46cf-a19a-29dce58a7176-kube-api-access-75dw7\") pod \"openshift-apiserver-operator-796bbdcf4f-2n6pb\" (UID: 
\"77fd0717-6d6c-46cf-a19a-29dce58a7176\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6pb" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.993913 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e6036e20-bd82-411b-a2f5-0806db078ac0-srv-cert\") pod \"olm-operator-6b444d44fb-xdsld\" (UID: \"e6036e20-bd82-411b-a2f5-0806db078ac0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.993932 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-sgpfh\" (UID: \"ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sgpfh" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.993949 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvcr9\" (UniqueName: \"kubernetes.io/projected/7a0991bf-b28c-471a-8c28-24b461784fdd-kube-api-access-pvcr9\") pod \"etcd-operator-b45778765-knm6z\" (UID: \"7a0991bf-b28c-471a-8c28-24b461784fdd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.994119 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-oauth-serving-cert\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.994442 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9bd21e79-5975-4f5f-976e-94a05e2df000-tmpfs\") pod \"packageserver-d55dfcdfc-cvhc2\" (UID: \"9bd21e79-5975-4f5f-976e-94a05e2df000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.994469 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-console-config\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.994488 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7a0991bf-b28c-471a-8c28-24b461784fdd-etcd-ca\") pod \"etcd-operator-b45778765-knm6z\" (UID: \"7a0991bf-b28c-471a-8c28-24b461784fdd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.995419 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-console-config\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.995516 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c1e5bc60-f7d5-436d-9298-3099adb6bc0a-images\") pod \"machine-api-operator-5694c8668f-stnj2\" (UID: \"c1e5bc60-f7d5-436d-9298-3099adb6bc0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-stnj2" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.995525 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.996025 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc9xj\" (UniqueName: \"kubernetes.io/projected/a39a9177-9838-434f-a2e0-c8359ff146fe-kube-api-access-bc9xj\") pod \"marketplace-operator-79b997595-6nkfm\" (UID: \"a39a9177-9838-434f-a2e0-c8359ff146fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.996104 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5b88dcb-84d4-4311-8c01-860d17b444eb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2mlww\" (UID: \"f5b88dcb-84d4-4311-8c01-860d17b444eb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2mlww" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.996297 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.996338 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/00b43a88-457b-4c5a-ab44-2af8e47b2c2d-metrics-tls\") pod \"ingress-operator-5b745b69d9-chdnk\" (UID: \"00b43a88-457b-4c5a-ab44-2af8e47b2c2d\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.996403 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/af0f5f19-3c55-45a1-9a8f-66e55fd46683-machine-approver-tls\") pod \"machine-approver-56656f9798-hjdrd\" (UID: \"af0f5f19-3c55-45a1-9a8f-66e55fd46683\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hjdrd" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.996440 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-config\") pod \"route-controller-manager-6576b87f9c-g2gvf\" (UID: \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.996576 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c1e5bc60-f7d5-436d-9298-3099adb6bc0a-images\") pod \"machine-api-operator-5694c8668f-stnj2\" (UID: \"c1e5bc60-f7d5-436d-9298-3099adb6bc0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-stnj2" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.996768 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d9814af-c66a-49fd-a3ca-814e8b0caf48-service-ca-bundle\") pod \"router-default-5444994796-dmmfm\" (UID: \"9d9814af-c66a-49fd-a3ca-814e8b0caf48\") " pod="openshift-ingress/router-default-5444994796-dmmfm" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.996827 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/04af8f3e-80ec-462f-bbda-a1d7e1ebd37f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r7lt2\" (UID: \"04af8f3e-80ec-462f-bbda-a1d7e1ebd37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.997081 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js4sk\" (UniqueName: \"kubernetes.io/projected/c1e5bc60-f7d5-436d-9298-3099adb6bc0a-kube-api-access-js4sk\") pod \"machine-api-operator-5694c8668f-stnj2\" (UID: \"c1e5bc60-f7d5-436d-9298-3099adb6bc0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-stnj2" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.997255 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:29 crc kubenswrapper[4833]: I0217 13:47:29.997887 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-config\") pod \"route-controller-manager-6576b87f9c-g2gvf\" (UID: \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.002460 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-encryption-config\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" Feb 17 13:47:30 crc 
kubenswrapper[4833]: I0217 13:47:30.004755 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-etcd-client\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.005861 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a0d00bf-a97f-4ccb-ba05-aba2a220014a-metrics-tls\") pod \"dns-operator-744455d44c-wc6g2\" (UID: \"6a0d00bf-a97f-4ccb-ba05-aba2a220014a\") " pod="openshift-dns-operator/dns-operator-744455d44c-wc6g2" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.006085 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/af0f5f19-3c55-45a1-9a8f-66e55fd46683-machine-approver-tls\") pod \"machine-approver-56656f9798-hjdrd\" (UID: \"af0f5f19-3c55-45a1-9a8f-66e55fd46683\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hjdrd" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.006184 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b617c42f-c749-41e5-a305-692a4c631656-console-oauth-config\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.007172 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-serving-cert\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 
13:47:30.010745 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d4005d2-d765-4a8e-9b85-8c49d8238995-serving-cert\") pod \"controller-manager-879f6c89f-btl28\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " pod="openshift-controller-manager/controller-manager-879f6c89f-btl28" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.010777 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-serving-cert\") pod \"route-controller-manager-6576b87f9c-g2gvf\" (UID: \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.011073 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1e5bc60-f7d5-436d-9298-3099adb6bc0a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-stnj2\" (UID: \"c1e5bc60-f7d5-436d-9298-3099adb6bc0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-stnj2" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.011752 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77fd0717-6d6c-46cf-a19a-29dce58a7176-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2n6pb\" (UID: \"77fd0717-6d6c-46cf-a19a-29dce58a7176\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6pb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.016251 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.036700 4833 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.056393 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.076411 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.098287 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpgp6\" (UniqueName: \"kubernetes.io/projected/c251713f-cec1-4ae0-a70b-553ab3b74a5b-kube-api-access-gpgp6\") pod \"openshift-config-operator-7777fb866f-l9kx4\" (UID: \"c251713f-cec1-4ae0-a70b-553ab3b74a5b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.098334 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njmxm\" (UniqueName: \"kubernetes.io/projected/3e99f9e5-e0f4-4444-8303-f69571809455-kube-api-access-njmxm\") pod \"control-plane-machine-set-operator-78cbb6b69f-slqbd\" (UID: \"3e99f9e5-e0f4-4444-8303-f69571809455\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-slqbd" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.098362 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-444mc\" (UniqueName: \"kubernetes.io/projected/9d9814af-c66a-49fd-a3ca-814e8b0caf48-kube-api-access-444mc\") pod \"router-default-5444994796-dmmfm\" (UID: \"9d9814af-c66a-49fd-a3ca-814e8b0caf48\") " pod="openshift-ingress/router-default-5444994796-dmmfm" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.098393 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.098441 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m84s2\" (UniqueName: \"kubernetes.io/projected/04af8f3e-80ec-462f-bbda-a1d7e1ebd37f-kube-api-access-m84s2\") pod \"cluster-image-registry-operator-dc59b4c8b-r7lt2\" (UID: \"04af8f3e-80ec-462f-bbda-a1d7e1ebd37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.098471 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e6036e20-bd82-411b-a2f5-0806db078ac0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xdsld\" (UID: \"e6036e20-bd82-411b-a2f5-0806db078ac0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.098502 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c46nj\" (UniqueName: \"kubernetes.io/projected/e6036e20-bd82-411b-a2f5-0806db078ac0-kube-api-access-c46nj\") pod \"olm-operator-6b444d44fb-xdsld\" (UID: \"e6036e20-bd82-411b-a2f5-0806db078ac0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.098548 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/234db162-a495-45d1-8af9-7e2deaa2763c-audit-dir\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc 
kubenswrapper[4833]: I0217 13:47:30.098582 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf443d79-e768-4da2-b385-e6b8072cb88e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xj6dt\" (UID: \"bf443d79-e768-4da2-b385-e6b8072cb88e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xj6dt" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.098615 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c251713f-cec1-4ae0-a70b-553ab3b74a5b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l9kx4\" (UID: \"c251713f-cec1-4ae0-a70b-553ab3b74a5b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.098651 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/234db162-a495-45d1-8af9-7e2deaa2763c-audit-dir\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.098664 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm4z9\" (UniqueName: \"kubernetes.io/projected/7998df5b-fb36-4561-870d-80e55c87facd-kube-api-access-wm4z9\") pod \"machine-config-server-lsrj4\" (UID: \"7998df5b-fb36-4561-870d-80e55c87facd\") " pod="openshift-machine-config-operator/machine-config-server-lsrj4" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.098727 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w76l\" (UniqueName: \"kubernetes.io/projected/00b43a88-457b-4c5a-ab44-2af8e47b2c2d-kube-api-access-6w76l\") pod 
\"ingress-operator-5b745b69d9-chdnk\" (UID: \"00b43a88-457b-4c5a-ab44-2af8e47b2c2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.098766 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3e99f9e5-e0f4-4444-8303-f69571809455-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-slqbd\" (UID: \"3e99f9e5-e0f4-4444-8303-f69571809455\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-slqbd" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.098825 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdptx\" (UniqueName: \"kubernetes.io/projected/10816227-9540-49c4-bd68-82e7810b9e06-kube-api-access-tdptx\") pod \"migrator-59844c95c7-znglf\" (UID: \"10816227-9540-49c4-bd68-82e7810b9e06\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-znglf" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.098873 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.098922 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/502603f9-5374-4fa7-8398-1f9d931e370f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gvzml\" (UID: \"502603f9-5374-4fa7-8398-1f9d931e370f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvzml" Feb 17 13:47:30 crc kubenswrapper[4833]: 
I0217 13:47:30.098960 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec012f89-9c82-4c49-9a7e-892979946444-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-27vkt\" (UID: \"ec012f89-9c82-4c49-9a7e-892979946444\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-27vkt" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.099022 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.099092 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf443d79-e768-4da2-b385-e6b8072cb88e-config\") pod \"kube-controller-manager-operator-78b949d7b-xj6dt\" (UID: \"bf443d79-e768-4da2-b385-e6b8072cb88e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xj6dt" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.099126 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d9814af-c66a-49fd-a3ca-814e8b0caf48-metrics-certs\") pod \"router-default-5444994796-dmmfm\" (UID: \"9d9814af-c66a-49fd-a3ca-814e8b0caf48\") " pod="openshift-ingress/router-default-5444994796-dmmfm" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.099158 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a0991bf-b28c-471a-8c28-24b461784fdd-serving-cert\") pod 
\"etcd-operator-b45778765-knm6z\" (UID: \"7a0991bf-b28c-471a-8c28-24b461784fdd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.099190 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9d9814af-c66a-49fd-a3ca-814e8b0caf48-default-certificate\") pod \"router-default-5444994796-dmmfm\" (UID: \"9d9814af-c66a-49fd-a3ca-814e8b0caf48\") " pod="openshift-ingress/router-default-5444994796-dmmfm" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.099232 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9d9814af-c66a-49fd-a3ca-814e8b0caf48-stats-auth\") pod \"router-default-5444994796-dmmfm\" (UID: \"9d9814af-c66a-49fd-a3ca-814e8b0caf48\") " pod="openshift-ingress/router-default-5444994796-dmmfm" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.099266 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04af8f3e-80ec-462f-bbda-a1d7e1ebd37f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r7lt2\" (UID: \"04af8f3e-80ec-462f-bbda-a1d7e1ebd37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.099299 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e90aa002-ad59-43a3-88db-f2e03408f40d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pc9vj\" (UID: \"e90aa002-ad59-43a3-88db-f2e03408f40d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pc9vj" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.099333 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/04af8f3e-80ec-462f-bbda-a1d7e1ebd37f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r7lt2\" (UID: \"04af8f3e-80ec-462f-bbda-a1d7e1ebd37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.099371 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkdwb\" (UniqueName: \"kubernetes.io/projected/ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815-kube-api-access-bkdwb\") pod \"openshift-controller-manager-operator-756b6f6bc6-sgpfh\" (UID: \"ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sgpfh" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.099405 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/00b43a88-457b-4c5a-ab44-2af8e47b2c2d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-chdnk\" (UID: \"00b43a88-457b-4c5a-ab44-2af8e47b2c2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.099443 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-audit-policies\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.099495 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qcm5\" (UniqueName: \"kubernetes.io/projected/c91cc697-a95c-4c07-8750-878937f50446-kube-api-access-8qcm5\") pod \"downloads-7954f5f757-fps2n\" (UID: \"c91cc697-a95c-4c07-8750-878937f50446\") " pod="openshift-console/downloads-7954f5f757-fps2n" Feb 17 13:47:30 
crc kubenswrapper[4833]: I0217 13:47:30.099546 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-sgpfh\" (UID: \"ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sgpfh" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.099580 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c251713f-cec1-4ae0-a70b-553ab3b74a5b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l9kx4\" (UID: \"c251713f-cec1-4ae0-a70b-553ab3b74a5b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.099582 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7998df5b-fb36-4561-870d-80e55c87facd-certs\") pod \"machine-config-server-lsrj4\" (UID: \"7998df5b-fb36-4561-870d-80e55c87facd\") " pod="openshift-machine-config-operator/machine-config-server-lsrj4" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.099878 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9bd21e79-5975-4f5f-976e-94a05e2df000-webhook-cert\") pod \"packageserver-d55dfcdfc-cvhc2\" (UID: \"9bd21e79-5975-4f5f-976e-94a05e2df000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.099918 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c251713f-cec1-4ae0-a70b-553ab3b74a5b-serving-cert\") pod \"openshift-config-operator-7777fb866f-l9kx4\" (UID: 
\"c251713f-cec1-4ae0-a70b-553ab3b74a5b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.099948 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhlb9\" (UniqueName: \"kubernetes.io/projected/9bd21e79-5975-4f5f-976e-94a05e2df000-kube-api-access-bhlb9\") pod \"packageserver-d55dfcdfc-cvhc2\" (UID: \"9bd21e79-5975-4f5f-976e-94a05e2df000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.099978 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/502603f9-5374-4fa7-8398-1f9d931e370f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gvzml\" (UID: \"502603f9-5374-4fa7-8398-1f9d931e370f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvzml" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100001 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/502603f9-5374-4fa7-8398-1f9d931e370f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gvzml\" (UID: \"502603f9-5374-4fa7-8398-1f9d931e370f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvzml" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100133 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100169 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a0991bf-b28c-471a-8c28-24b461784fdd-etcd-service-ca\") pod \"etcd-operator-b45778765-knm6z\" (UID: \"7a0991bf-b28c-471a-8c28-24b461784fdd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100192 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e90aa002-ad59-43a3-88db-f2e03408f40d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pc9vj\" (UID: \"e90aa002-ad59-43a3-88db-f2e03408f40d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pc9vj" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100216 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00b43a88-457b-4c5a-ab44-2af8e47b2c2d-trusted-ca\") pod \"ingress-operator-5b745b69d9-chdnk\" (UID: \"00b43a88-457b-4c5a-ab44-2af8e47b2c2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100255 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a39a9177-9838-434f-a2e0-c8359ff146fe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6nkfm\" (UID: \"a39a9177-9838-434f-a2e0-c8359ff146fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100281 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvprr\" (UniqueName: \"kubernetes.io/projected/234db162-a495-45d1-8af9-7e2deaa2763c-kube-api-access-pvprr\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100312 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e6036e20-bd82-411b-a2f5-0806db078ac0-srv-cert\") pod \"olm-operator-6b444d44fb-xdsld\" (UID: \"e6036e20-bd82-411b-a2f5-0806db078ac0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100335 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvcr9\" (UniqueName: \"kubernetes.io/projected/7a0991bf-b28c-471a-8c28-24b461784fdd-kube-api-access-pvcr9\") pod \"etcd-operator-b45778765-knm6z\" (UID: \"7a0991bf-b28c-471a-8c28-24b461784fdd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100359 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9bd21e79-5975-4f5f-976e-94a05e2df000-tmpfs\") pod \"packageserver-d55dfcdfc-cvhc2\" (UID: \"9bd21e79-5975-4f5f-976e-94a05e2df000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100417 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-sgpfh\" (UID: \"ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sgpfh" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100445 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04af8f3e-80ec-462f-bbda-a1d7e1ebd37f-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-r7lt2\" (UID: \"04af8f3e-80ec-462f-bbda-a1d7e1ebd37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100451 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7a0991bf-b28c-471a-8c28-24b461784fdd-etcd-ca\") pod \"etcd-operator-b45778765-knm6z\" (UID: \"7a0991bf-b28c-471a-8c28-24b461784fdd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100476 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-sgpfh\" (UID: \"ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sgpfh" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100503 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100533 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc9xj\" (UniqueName: \"kubernetes.io/projected/a39a9177-9838-434f-a2e0-c8359ff146fe-kube-api-access-bc9xj\") pod \"marketplace-operator-79b997595-6nkfm\" (UID: \"a39a9177-9838-434f-a2e0-c8359ff146fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100558 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5b88dcb-84d4-4311-8c01-860d17b444eb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2mlww\" (UID: \"f5b88dcb-84d4-4311-8c01-860d17b444eb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2mlww" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100582 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/00b43a88-457b-4c5a-ab44-2af8e47b2c2d-metrics-tls\") pod \"ingress-operator-5b745b69d9-chdnk\" (UID: \"00b43a88-457b-4c5a-ab44-2af8e47b2c2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100612 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d9814af-c66a-49fd-a3ca-814e8b0caf48-service-ca-bundle\") pod \"router-default-5444994796-dmmfm\" (UID: \"9d9814af-c66a-49fd-a3ca-814e8b0caf48\") " pod="openshift-ingress/router-default-5444994796-dmmfm" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100636 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/04af8f3e-80ec-462f-bbda-a1d7e1ebd37f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r7lt2\" (UID: \"04af8f3e-80ec-462f-bbda-a1d7e1ebd37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100697 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100726 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spkwh\" (UniqueName: \"kubernetes.io/projected/ec012f89-9c82-4c49-9a7e-892979946444-kube-api-access-spkwh\") pod \"package-server-manager-789f6589d5-27vkt\" (UID: \"ec012f89-9c82-4c49-9a7e-892979946444\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-27vkt" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100756 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100783 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100806 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf443d79-e768-4da2-b385-e6b8072cb88e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xj6dt\" (UID: \"bf443d79-e768-4da2-b385-e6b8072cb88e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xj6dt" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100845 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100849 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a0991bf-b28c-471a-8c28-24b461784fdd-etcd-service-ca\") pod \"etcd-operator-b45778765-knm6z\" (UID: \"7a0991bf-b28c-471a-8c28-24b461784fdd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100872 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100896 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzcds\" (UniqueName: \"kubernetes.io/projected/f5b88dcb-84d4-4311-8c01-860d17b444eb-kube-api-access-rzcds\") pod \"multus-admission-controller-857f4d67dd-2mlww\" (UID: \"f5b88dcb-84d4-4311-8c01-860d17b444eb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2mlww" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100929 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e90aa002-ad59-43a3-88db-f2e03408f40d-config\") pod \"kube-apiserver-operator-766d6c64bb-pc9vj\" (UID: \"e90aa002-ad59-43a3-88db-f2e03408f40d\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pc9vj" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100966 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a39a9177-9838-434f-a2e0-c8359ff146fe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6nkfm\" (UID: \"a39a9177-9838-434f-a2e0-c8359ff146fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.100992 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9bd21e79-5975-4f5f-976e-94a05e2df000-apiservice-cert\") pod \"packageserver-d55dfcdfc-cvhc2\" (UID: \"9bd21e79-5975-4f5f-976e-94a05e2df000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.101023 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a0991bf-b28c-471a-8c28-24b461784fdd-config\") pod \"etcd-operator-b45778765-knm6z\" (UID: \"7a0991bf-b28c-471a-8c28-24b461784fdd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.101067 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7998df5b-fb36-4561-870d-80e55c87facd-node-bootstrap-token\") pod \"machine-config-server-lsrj4\" (UID: \"7998df5b-fb36-4561-870d-80e55c87facd\") " pod="openshift-machine-config-operator/machine-config-server-lsrj4" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.101102 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a0991bf-b28c-471a-8c28-24b461784fdd-etcd-client\") 
pod \"etcd-operator-b45778765-knm6z\" (UID: \"7a0991bf-b28c-471a-8c28-24b461784fdd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.101128 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.101386 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7a0991bf-b28c-471a-8c28-24b461784fdd-etcd-ca\") pod \"etcd-operator-b45778765-knm6z\" (UID: \"7a0991bf-b28c-471a-8c28-24b461784fdd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.101653 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9bd21e79-5975-4f5f-976e-94a05e2df000-tmpfs\") pod \"packageserver-d55dfcdfc-cvhc2\" (UID: \"9bd21e79-5975-4f5f-976e-94a05e2df000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.103089 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.103493 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 
13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.103778 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.103893 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.104174 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.104333 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/04af8f3e-80ec-462f-bbda-a1d7e1ebd37f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r7lt2\" (UID: \"04af8f3e-80ec-462f-bbda-a1d7e1ebd37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.104442 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a0991bf-b28c-471a-8c28-24b461784fdd-config\") pod 
\"etcd-operator-b45778765-knm6z\" (UID: \"7a0991bf-b28c-471a-8c28-24b461784fdd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.104574 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.104673 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-audit-policies\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.104885 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.105284 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.105308 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.105754 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/00b43a88-457b-4c5a-ab44-2af8e47b2c2d-metrics-tls\") pod \"ingress-operator-5b745b69d9-chdnk\" (UID: \"00b43a88-457b-4c5a-ab44-2af8e47b2c2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.105902 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a0991bf-b28c-471a-8c28-24b461784fdd-serving-cert\") pod \"etcd-operator-b45778765-knm6z\" (UID: \"7a0991bf-b28c-471a-8c28-24b461784fdd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.107012 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.107030 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" 
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.108375 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-sgpfh\" (UID: \"ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sgpfh"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.109399 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a0991bf-b28c-471a-8c28-24b461784fdd-etcd-client\") pod \"etcd-operator-b45778765-knm6z\" (UID: \"7a0991bf-b28c-471a-8c28-24b461784fdd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.109524 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.112450 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00b43a88-457b-4c5a-ab44-2af8e47b2c2d-trusted-ca\") pod \"ingress-operator-5b745b69d9-chdnk\" (UID: \"00b43a88-457b-4c5a-ab44-2af8e47b2c2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.115688 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.135863 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.155790 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.177476 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.196543 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.216745 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.236883 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.247247 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e90aa002-ad59-43a3-88db-f2e03408f40d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pc9vj\" (UID: \"e90aa002-ad59-43a3-88db-f2e03408f40d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pc9vj"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.257340 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.263880 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e90aa002-ad59-43a3-88db-f2e03408f40d-config\") pod \"kube-apiserver-operator-766d6c64bb-pc9vj\" (UID: \"e90aa002-ad59-43a3-88db-f2e03408f40d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pc9vj"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.275493 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.296361 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.302740 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec012f89-9c82-4c49-9a7e-892979946444-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-27vkt\" (UID: \"ec012f89-9c82-4c49-9a7e-892979946444\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-27vkt"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.316378 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.336081 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.356082 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.363755 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3e99f9e5-e0f4-4444-8303-f69571809455-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-slqbd\" (UID: \"3e99f9e5-e0f4-4444-8303-f69571809455\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-slqbd"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.376202 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.396415 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.403883 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf443d79-e768-4da2-b385-e6b8072cb88e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xj6dt\" (UID: \"bf443d79-e768-4da2-b385-e6b8072cb88e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xj6dt"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.416707 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.420881 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf443d79-e768-4da2-b385-e6b8072cb88e-config\") pod \"kube-controller-manager-operator-78b949d7b-xj6dt\" (UID: \"bf443d79-e768-4da2-b385-e6b8072cb88e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xj6dt"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.436415 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.456781 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.477026 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.496670 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.516470 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.527780 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22bea415-75b9-4f36-a531-62617ed244c8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vtrsq\" (UID: \"22bea415-75b9-4f36-a531-62617ed244c8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vtrsq"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.537502 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.541671 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22bea415-75b9-4f36-a531-62617ed244c8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vtrsq\" (UID: \"22bea415-75b9-4f36-a531-62617ed244c8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vtrsq"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.556297 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.576799 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.584208 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/502603f9-5374-4fa7-8398-1f9d931e370f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gvzml\" (UID: \"502603f9-5374-4fa7-8398-1f9d931e370f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvzml"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.596465 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.616837 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.621476 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/502603f9-5374-4fa7-8398-1f9d931e370f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gvzml\" (UID: \"502603f9-5374-4fa7-8398-1f9d931e370f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvzml"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.636907 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.644016 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e6036e20-bd82-411b-a2f5-0806db078ac0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xdsld\" (UID: \"e6036e20-bd82-411b-a2f5-0806db078ac0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.656793 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.677110 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.686879 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e6036e20-bd82-411b-a2f5-0806db078ac0-srv-cert\") pod \"olm-operator-6b444d44fb-xdsld\" (UID: \"e6036e20-bd82-411b-a2f5-0806db078ac0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.697772 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.704934 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d9814af-c66a-49fd-a3ca-814e8b0caf48-metrics-certs\") pod \"router-default-5444994796-dmmfm\" (UID: \"9d9814af-c66a-49fd-a3ca-814e8b0caf48\") " pod="openshift-ingress/router-default-5444994796-dmmfm"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.717227 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.737248 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.744481 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9d9814af-c66a-49fd-a3ca-814e8b0caf48-default-certificate\") pod \"router-default-5444994796-dmmfm\" (UID: \"9d9814af-c66a-49fd-a3ca-814e8b0caf48\") " pod="openshift-ingress/router-default-5444994796-dmmfm"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.764362 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.775748 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.776112 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9d9814af-c66a-49fd-a3ca-814e8b0caf48-stats-auth\") pod \"router-default-5444994796-dmmfm\" (UID: \"9d9814af-c66a-49fd-a3ca-814e8b0caf48\") " pod="openshift-ingress/router-default-5444994796-dmmfm"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.783551 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d9814af-c66a-49fd-a3ca-814e8b0caf48-service-ca-bundle\") pod \"router-default-5444994796-dmmfm\" (UID: \"9d9814af-c66a-49fd-a3ca-814e8b0caf48\") " pod="openshift-ingress/router-default-5444994796-dmmfm"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.797013 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.816992 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.837130 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.854702 4833 request.go:700] Waited for 1.00716225s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-chnjx&limit=500&resourceVersion=0
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.857081 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.878494 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.898531 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.917427 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.943530 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.954771 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c251713f-cec1-4ae0-a70b-553ab3b74a5b-serving-cert\") pod \"openshift-config-operator-7777fb866f-l9kx4\" (UID: \"c251713f-cec1-4ae0-a70b-553ab3b74a5b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.970330 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.975718 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 17 13:47:30 crc kubenswrapper[4833]: I0217 13:47:30.996857 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.008005 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7998df5b-fb36-4561-870d-80e55c87facd-node-bootstrap-token\") pod \"machine-config-server-lsrj4\" (UID: \"7998df5b-fb36-4561-870d-80e55c87facd\") " pod="openshift-machine-config-operator/machine-config-server-lsrj4"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.016221 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.024059 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7998df5b-fb36-4561-870d-80e55c87facd-certs\") pod \"machine-config-server-lsrj4\" (UID: \"7998df5b-fb36-4561-870d-80e55c87facd\") " pod="openshift-machine-config-operator/machine-config-server-lsrj4"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.037308 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.056793 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.076088 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.088093 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a39a9177-9838-434f-a2e0-c8359ff146fe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6nkfm\" (UID: \"a39a9177-9838-434f-a2e0-c8359ff146fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm"
Feb 17 13:47:31 crc kubenswrapper[4833]: E0217 13:47:31.100097 4833 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Feb 17 13:47:31 crc kubenswrapper[4833]: E0217 13:47:31.100229 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd21e79-5975-4f5f-976e-94a05e2df000-webhook-cert podName:9bd21e79-5975-4f5f-976e-94a05e2df000 nodeName:}" failed. No retries permitted until 2026-02-17 13:47:31.600191061 +0000 UTC m=+141.235290534 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/9bd21e79-5975-4f5f-976e-94a05e2df000-webhook-cert") pod "packageserver-d55dfcdfc-cvhc2" (UID: "9bd21e79-5975-4f5f-976e-94a05e2df000") : failed to sync secret cache: timed out waiting for the condition
Feb 17 13:47:31 crc kubenswrapper[4833]: E0217 13:47:31.100871 4833 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 17 13:47:31 crc kubenswrapper[4833]: E0217 13:47:31.100959 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a39a9177-9838-434f-a2e0-c8359ff146fe-marketplace-trusted-ca podName:a39a9177-9838-434f-a2e0-c8359ff146fe nodeName:}" failed. No retries permitted until 2026-02-17 13:47:31.600937873 +0000 UTC m=+141.236037346 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/a39a9177-9838-434f-a2e0-c8359ff146fe-marketplace-trusted-ca") pod "marketplace-operator-79b997595-6nkfm" (UID: "a39a9177-9838-434f-a2e0-c8359ff146fe") : failed to sync configmap cache: timed out waiting for the condition
Feb 17 13:47:31 crc kubenswrapper[4833]: E0217 13:47:31.101954 4833 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Feb 17 13:47:31 crc kubenswrapper[4833]: E0217 13:47:31.102002 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd21e79-5975-4f5f-976e-94a05e2df000-apiservice-cert podName:9bd21e79-5975-4f5f-976e-94a05e2df000 nodeName:}" failed. No retries permitted until 2026-02-17 13:47:31.601989784 +0000 UTC m=+141.237089227 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/9bd21e79-5975-4f5f-976e-94a05e2df000-apiservice-cert") pod "packageserver-d55dfcdfc-cvhc2" (UID: "9bd21e79-5975-4f5f-976e-94a05e2df000") : failed to sync secret cache: timed out waiting for the condition
Feb 17 13:47:31 crc kubenswrapper[4833]: E0217 13:47:31.102267 4833 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Feb 17 13:47:31 crc kubenswrapper[4833]: E0217 13:47:31.102308 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5b88dcb-84d4-4311-8c01-860d17b444eb-webhook-certs podName:f5b88dcb-84d4-4311-8c01-860d17b444eb nodeName:}" failed. No retries permitted until 2026-02-17 13:47:31.602298713 +0000 UTC m=+141.237398156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f5b88dcb-84d4-4311-8c01-860d17b444eb-webhook-certs") pod "multus-admission-controller-857f4d67dd-2mlww" (UID: "f5b88dcb-84d4-4311-8c01-860d17b444eb") : failed to sync secret cache: timed out waiting for the condition
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.105432 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.116492 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.137714 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.157800 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.175966 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.196918 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.216296 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.237182 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.256604 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.276224 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.297024 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.317621 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.336981 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.357458 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.377161 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.396900 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.446166 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4dgk\" (UniqueName: \"kubernetes.io/projected/12eba345-07f7-4472-8a45-d0bd87317b0a-kube-api-access-f4dgk\") pod \"cluster-samples-operator-665b6dd947-nsksg\" (UID: \"12eba345-07f7-4472-8a45-d0bd87317b0a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nsksg"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.458471 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.476608 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.498110 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.517678 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.537546 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.557150 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.576868 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.596961 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.617642 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.625165 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9bd21e79-5975-4f5f-976e-94a05e2df000-webhook-cert\") pod \"packageserver-d55dfcdfc-cvhc2\" (UID: \"9bd21e79-5975-4f5f-976e-94a05e2df000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.625382 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a39a9177-9838-434f-a2e0-c8359ff146fe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6nkfm\" (UID: \"a39a9177-9838-434f-a2e0-c8359ff146fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.625513 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5b88dcb-84d4-4311-8c01-860d17b444eb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2mlww\" (UID: \"f5b88dcb-84d4-4311-8c01-860d17b444eb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2mlww"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.625719 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9bd21e79-5975-4f5f-976e-94a05e2df000-apiservice-cert\") pod \"packageserver-d55dfcdfc-cvhc2\" (UID: \"9bd21e79-5975-4f5f-976e-94a05e2df000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.628626 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a39a9177-9838-434f-a2e0-c8359ff146fe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6nkfm\" (UID: \"a39a9177-9838-434f-a2e0-c8359ff146fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.630709 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9bd21e79-5975-4f5f-976e-94a05e2df000-apiservice-cert\") pod \"packageserver-d55dfcdfc-cvhc2\" (UID: \"9bd21e79-5975-4f5f-976e-94a05e2df000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.631030 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9bd21e79-5975-4f5f-976e-94a05e2df000-webhook-cert\") pod \"packageserver-d55dfcdfc-cvhc2\" (UID: \"9bd21e79-5975-4f5f-976e-94a05e2df000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.631032 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5b88dcb-84d4-4311-8c01-860d17b444eb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2mlww\" (UID: \"f5b88dcb-84d4-4311-8c01-860d17b444eb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2mlww"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.636232 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.656270 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.677410 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.696952 4833 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.715773 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.734679 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nsksg"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.736615 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.757993 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.777084 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.826762 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqlg2\" (UniqueName: \"kubernetes.io/projected/af0f5f19-3c55-45a1-9a8f-66e55fd46683-kube-api-access-kqlg2\") pod \"machine-approver-56656f9798-hjdrd\" (UID: \"af0f5f19-3c55-45a1-9a8f-66e55fd46683\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hjdrd"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.836598 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f77r\" (UniqueName: \"kubernetes.io/projected/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-kube-api-access-5f77r\") pod \"route-controller-manager-6576b87f9c-g2gvf\" (UID: \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.855680 4833 request.go:700] Waited for 1.868511605s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/serviceaccounts/console-operator/token
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.858503 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp96f\" (UniqueName: \"kubernetes.io/projected/3d4005d2-d765-4a8e-9b85-8c49d8238995-kube-api-access-xp96f\") pod \"controller-manager-879f6c89f-btl28\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " pod="openshift-controller-manager/controller-manager-879f6c89f-btl28"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.878263 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwv6f\" (UniqueName: \"kubernetes.io/projected/116808cb-b59f-4f18-8fa1-a383fba544d9-kube-api-access-gwv6f\") pod \"console-operator-58897d9998-4wnpt\" (UID: \"116808cb-b59f-4f18-8fa1-a383fba544d9\") " pod="openshift-console-operator/console-operator-58897d9998-4wnpt"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.883130 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-btl28"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.897397 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78qdg\" (UniqueName: \"kubernetes.io/projected/22bea415-75b9-4f36-a531-62617ed244c8-kube-api-access-78qdg\") pod \"kube-storage-version-migrator-operator-b67b599dd-vtrsq\" (UID: \"22bea415-75b9-4f36-a531-62617ed244c8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vtrsq"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.911739 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqz5v\" (UniqueName: \"kubernetes.io/projected/d5560b0e-f2b1-469b-b989-f2abce8e9b8b-kube-api-access-pqz5v\") pod \"apiserver-76f77b778f-b8pbt\" (UID: \"d5560b0e-f2b1-469b-b989-f2abce8e9b8b\") " pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.934120 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzqdl\" (UniqueName: \"kubernetes.io/projected/b617c42f-c749-41e5-a305-692a4c631656-kube-api-access-dzqdl\") pod \"console-f9d7485db-xwz2m\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " pod="openshift-console/console-f9d7485db-xwz2m"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.948396 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.951448 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nngdf\" (UniqueName: \"kubernetes.io/projected/f1cb67f7-508e-44b5-9fa7-bb8f811812f8-kube-api-access-nngdf\") pod \"apiserver-7bbb656c7d-xdxhg\" (UID: \"f1cb67f7-508e-44b5-9fa7-bb8f811812f8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.962852 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4wnpt"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.974577 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm6vx\" (UniqueName: \"kubernetes.io/projected/87a6deb5-a486-4485-9e62-e9eb946878ca-kube-api-access-rm6vx\") pod \"authentication-operator-69f744f599-jnj5g\" (UID: \"87a6deb5-a486-4485-9e62-e9eb946878ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.976164 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hjdrd"
Feb 17 13:47:31 crc kubenswrapper[4833]: I0217 13:47:31.992025 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsw2t\" (UniqueName: \"kubernetes.io/projected/6a0d00bf-a97f-4ccb-ba05-aba2a220014a-kube-api-access-nsw2t\") pod \"dns-operator-744455d44c-wc6g2\" (UID: \"6a0d00bf-a97f-4ccb-ba05-aba2a220014a\") " pod="openshift-dns-operator/dns-operator-744455d44c-wc6g2"
Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.010653 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-b8pbt"
Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.017198 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wc6g2"
Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.019545 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75dw7\" (UniqueName: \"kubernetes.io/projected/77fd0717-6d6c-46cf-a19a-29dce58a7176-kube-api-access-75dw7\") pod \"openshift-apiserver-operator-796bbdcf4f-2n6pb\" (UID: \"77fd0717-6d6c-46cf-a19a-29dce58a7176\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6pb"
Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.026589 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.036175 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js4sk\" (UniqueName: \"kubernetes.io/projected/c1e5bc60-f7d5-436d-9298-3099adb6bc0a-kube-api-access-js4sk\") pod \"machine-api-operator-5694c8668f-stnj2\" (UID: \"c1e5bc60-f7d5-436d-9298-3099adb6bc0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-stnj2" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.045547 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xwz2m" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.071849 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpgp6\" (UniqueName: \"kubernetes.io/projected/c251713f-cec1-4ae0-a70b-553ab3b74a5b-kube-api-access-gpgp6\") pod \"openshift-config-operator-7777fb866f-l9kx4\" (UID: \"c251713f-cec1-4ae0-a70b-553ab3b74a5b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.083429 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-btl28"] Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.102142 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njmxm\" (UniqueName: \"kubernetes.io/projected/3e99f9e5-e0f4-4444-8303-f69571809455-kube-api-access-njmxm\") pod \"control-plane-machine-set-operator-78cbb6b69f-slqbd\" (UID: \"3e99f9e5-e0f4-4444-8303-f69571809455\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-slqbd" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.111675 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-444mc\" (UniqueName: 
\"kubernetes.io/projected/9d9814af-c66a-49fd-a3ca-814e8b0caf48-kube-api-access-444mc\") pod \"router-default-5444994796-dmmfm\" (UID: \"9d9814af-c66a-49fd-a3ca-814e8b0caf48\") " pod="openshift-ingress/router-default-5444994796-dmmfm" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.133627 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m84s2\" (UniqueName: \"kubernetes.io/projected/04af8f3e-80ec-462f-bbda-a1d7e1ebd37f-kube-api-access-m84s2\") pod \"cluster-image-registry-operator-dc59b4c8b-r7lt2\" (UID: \"04af8f3e-80ec-462f-bbda-a1d7e1ebd37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.155819 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c46nj\" (UniqueName: \"kubernetes.io/projected/e6036e20-bd82-411b-a2f5-0806db078ac0-kube-api-access-c46nj\") pod \"olm-operator-6b444d44fb-xdsld\" (UID: \"e6036e20-bd82-411b-a2f5-0806db078ac0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.162251 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-slqbd" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.166308 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vtrsq" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.168464 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4wnpt"] Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.170860 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm4z9\" (UniqueName: \"kubernetes.io/projected/7998df5b-fb36-4561-870d-80e55c87facd-kube-api-access-wm4z9\") pod \"machine-config-server-lsrj4\" (UID: \"7998df5b-fb36-4561-870d-80e55c87facd\") " pod="openshift-machine-config-operator/machine-config-server-lsrj4" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.173554 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nsksg"] Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.178257 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-stnj2" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.183284 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.186206 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-dmmfm" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.190556 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w76l\" (UniqueName: \"kubernetes.io/projected/00b43a88-457b-4c5a-ab44-2af8e47b2c2d-kube-api-access-6w76l\") pod \"ingress-operator-5b745b69d9-chdnk\" (UID: \"00b43a88-457b-4c5a-ab44-2af8e47b2c2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.214922 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.221517 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdptx\" (UniqueName: \"kubernetes.io/projected/10816227-9540-49c4-bd68-82e7810b9e06-kube-api-access-tdptx\") pod \"migrator-59844c95c7-znglf\" (UID: \"10816227-9540-49c4-bd68-82e7810b9e06\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-znglf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.231910 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04af8f3e-80ec-462f-bbda-a1d7e1ebd37f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r7lt2\" (UID: \"04af8f3e-80ec-462f-bbda-a1d7e1ebd37f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.241667 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf"] Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.241870 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.249317 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lsrj4" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.254383 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qcm5\" (UniqueName: \"kubernetes.io/projected/c91cc697-a95c-4c07-8750-878937f50446-kube-api-access-8qcm5\") pod \"downloads-7954f5f757-fps2n\" (UID: \"c91cc697-a95c-4c07-8750-878937f50446\") " pod="openshift-console/downloads-7954f5f757-fps2n" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.264320 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wc6g2"] Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.270578 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/00b43a88-457b-4c5a-ab44-2af8e47b2c2d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-chdnk\" (UID: \"00b43a88-457b-4c5a-ab44-2af8e47b2c2d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.287232 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkdwb\" (UniqueName: \"kubernetes.io/projected/ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815-kube-api-access-bkdwb\") pod \"openshift-controller-manager-operator-756b6f6bc6-sgpfh\" (UID: \"ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sgpfh" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.288282 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-znglf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.288417 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6pb" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.312333 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhlb9\" (UniqueName: \"kubernetes.io/projected/9bd21e79-5975-4f5f-976e-94a05e2df000-kube-api-access-bhlb9\") pod \"packageserver-d55dfcdfc-cvhc2\" (UID: \"9bd21e79-5975-4f5f-976e-94a05e2df000\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.328487 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/502603f9-5374-4fa7-8398-1f9d931e370f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gvzml\" (UID: \"502603f9-5374-4fa7-8398-1f9d931e370f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvzml" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.352636 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sgpfh" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.355323 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvprr\" (UniqueName: \"kubernetes.io/projected/234db162-a495-45d1-8af9-7e2deaa2763c-kube-api-access-pvprr\") pod \"oauth-openshift-558db77b4-h2rpb\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") " pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.361275 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.370470 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc9xj\" (UniqueName: \"kubernetes.io/projected/a39a9177-9838-434f-a2e0-c8359ff146fe-kube-api-access-bc9xj\") pod \"marketplace-operator-79b997595-6nkfm\" (UID: \"a39a9177-9838-434f-a2e0-c8359ff146fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.374340 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:32 crc kubenswrapper[4833]: W0217 13:47:32.381720 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d9814af_c66a_49fd_a3ca_814e8b0caf48.slice/crio-ec09475942da223e57a881f5d1fcb9ed025d9e32f765ac0683b3059d34a17b32 WatchSource:0}: Error finding container ec09475942da223e57a881f5d1fcb9ed025d9e32f765ac0683b3059d34a17b32: Status 404 returned error can't find the container with id ec09475942da223e57a881f5d1fcb9ed025d9e32f765ac0683b3059d34a17b32 Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.382459 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jnj5g"] Feb 17 13:47:32 crc kubenswrapper[4833]: W0217 13:47:32.383861 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7998df5b_fb36_4561_870d_80e55c87facd.slice/crio-72e0179722e68d5e7a09fe65f848d00711ea38148a79f5febf2cedd1d8681af1 WatchSource:0}: Error finding container 72e0179722e68d5e7a09fe65f848d00711ea38148a79f5febf2cedd1d8681af1: Status 404 returned error can't find the container with id 
72e0179722e68d5e7a09fe65f848d00711ea38148a79f5febf2cedd1d8681af1 Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.392245 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.392644 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e90aa002-ad59-43a3-88db-f2e03408f40d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pc9vj\" (UID: \"e90aa002-ad59-43a3-88db-f2e03408f40d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pc9vj" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.406881 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-xwz2m"] Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.414509 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spkwh\" (UniqueName: \"kubernetes.io/projected/ec012f89-9c82-4c49-9a7e-892979946444-kube-api-access-spkwh\") pod \"package-server-manager-789f6589d5-27vkt\" (UID: \"ec012f89-9c82-4c49-9a7e-892979946444\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-27vkt" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.438179 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pc9vj" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.439733 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvcr9\" (UniqueName: \"kubernetes.io/projected/7a0991bf-b28c-471a-8c28-24b461784fdd-kube-api-access-pvcr9\") pod \"etcd-operator-b45778765-knm6z\" (UID: \"7a0991bf-b28c-471a-8c28-24b461784fdd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.446169 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-27vkt" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.448266 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vtrsq"] Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.450333 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzcds\" (UniqueName: \"kubernetes.io/projected/f5b88dcb-84d4-4311-8c01-860d17b444eb-kube-api-access-rzcds\") pod \"multus-admission-controller-857f4d67dd-2mlww\" (UID: \"f5b88dcb-84d4-4311-8c01-860d17b444eb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2mlww" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.472143 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf443d79-e768-4da2-b385-e6b8072cb88e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xj6dt\" (UID: \"bf443d79-e768-4da2-b385-e6b8072cb88e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xj6dt" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.473383 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvzml" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.505925 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fps2n" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.528911 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-b8pbt"] Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.544968 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16569a9d-7677-455d-87c6-7b2fb504b731-trusted-ca\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545002 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fskw\" (UniqueName: \"kubernetes.io/projected/6ff7bf32-ea2f-46af-89c7-7467a9e48f06-kube-api-access-7fskw\") pod \"machine-config-controller-84d6567774-5d2h9\" (UID: \"6ff7bf32-ea2f-46af-89c7-7467a9e48f06\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5d2h9" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545070 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545089 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-b8662\" (UniqueName: \"kubernetes.io/projected/6d2e32a4-51dc-4406-834c-58392b4727b2-kube-api-access-b8662\") pod \"machine-config-operator-74547568cd-87pnl\" (UID: \"6d2e32a4-51dc-4406-834c-58392b4727b2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545105 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ff7bf32-ea2f-46af-89c7-7467a9e48f06-proxy-tls\") pod \"machine-config-controller-84d6567774-5d2h9\" (UID: \"6ff7bf32-ea2f-46af-89c7-7467a9e48f06\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5d2h9" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545217 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6d2e32a4-51dc-4406-834c-58392b4727b2-images\") pod \"machine-config-operator-74547568cd-87pnl\" (UID: \"6d2e32a4-51dc-4406-834c-58392b4727b2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545256 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/40132c32-a2e7-4a23-a0db-4d5a389e8df1-profile-collector-cert\") pod \"catalog-operator-68c6474976-7r5vw\" (UID: \"40132c32-a2e7-4a23-a0db-4d5a389e8df1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545275 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/40132c32-a2e7-4a23-a0db-4d5a389e8df1-srv-cert\") pod \"catalog-operator-68c6474976-7r5vw\" (UID: 
\"40132c32-a2e7-4a23-a0db-4d5a389e8df1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545294 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06a44619-0a61-488d-a3ba-1ac282968063-serving-cert\") pod \"service-ca-operator-777779d784-7dwhv\" (UID: \"06a44619-0a61-488d-a3ba-1ac282968063\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dwhv" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545312 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d2e32a4-51dc-4406-834c-58392b4727b2-proxy-tls\") pod \"machine-config-operator-74547568cd-87pnl\" (UID: \"6d2e32a4-51dc-4406-834c-58392b4727b2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545363 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6d2e32a4-51dc-4406-834c-58392b4727b2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-87pnl\" (UID: \"6d2e32a4-51dc-4406-834c-58392b4727b2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545383 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16569a9d-7677-455d-87c6-7b2fb504b731-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: E0217 13:47:32.545436 4833 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:33.045420182 +0000 UTC m=+142.680519615 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545471 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16569a9d-7677-455d-87c6-7b2fb504b731-bound-sa-token\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545532 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16569a9d-7677-455d-87c6-7b2fb504b731-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545558 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9tv5\" (UniqueName: \"kubernetes.io/projected/40132c32-a2e7-4a23-a0db-4d5a389e8df1-kube-api-access-j9tv5\") pod \"catalog-operator-68c6474976-7r5vw\" (UID: \"40132c32-a2e7-4a23-a0db-4d5a389e8df1\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545594 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5q8r\" (UniqueName: \"kubernetes.io/projected/06a44619-0a61-488d-a3ba-1ac282968063-kube-api-access-p5q8r\") pod \"service-ca-operator-777779d784-7dwhv\" (UID: \"06a44619-0a61-488d-a3ba-1ac282968063\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dwhv" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545694 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16569a9d-7677-455d-87c6-7b2fb504b731-registry-tls\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545737 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5xcv\" (UniqueName: \"kubernetes.io/projected/16569a9d-7677-455d-87c6-7b2fb504b731-kube-api-access-v5xcv\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545753 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06a44619-0a61-488d-a3ba-1ac282968063-config\") pod \"service-ca-operator-777779d784-7dwhv\" (UID: \"06a44619-0a61-488d-a3ba-1ac282968063\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dwhv" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545773 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16569a9d-7677-455d-87c6-7b2fb504b731-registry-certificates\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.545885 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6ff7bf32-ea2f-46af-89c7-7467a9e48f06-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5d2h9\" (UID: \"6ff7bf32-ea2f-46af-89c7-7467a9e48f06\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5d2h9" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.557787 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.565804 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2mlww" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.581015 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.646696 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.646981 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d2e32a4-51dc-4406-834c-58392b4727b2-proxy-tls\") pod \"machine-config-operator-74547568cd-87pnl\" (UID: \"6d2e32a4-51dc-4406-834c-58392b4727b2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647006 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/80d59877-4f3e-4680-9348-3109670fd514-socket-dir\") pod \"csi-hostpathplugin-57krf\" (UID: \"80d59877-4f3e-4680-9348-3109670fd514\") " pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647066 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6d2e32a4-51dc-4406-834c-58392b4727b2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-87pnl\" (UID: \"6d2e32a4-51dc-4406-834c-58392b4727b2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647092 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/16569a9d-7677-455d-87c6-7b2fb504b731-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647112 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/80d59877-4f3e-4680-9348-3109670fd514-plugins-dir\") pod \"csi-hostpathplugin-57krf\" (UID: \"80d59877-4f3e-4680-9348-3109670fd514\") " pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647147 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/61861be4-c520-4f21-9a1b-4ad3bb11ba3c-signing-cabundle\") pod \"service-ca-9c57cc56f-k497l\" (UID: \"61861be4-c520-4f21-9a1b-4ad3bb11ba3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-k497l" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647173 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa2d375b-2949-4037-a7f2-33fff8c9fde6-secret-volume\") pod \"collect-profiles-29522265-s4986\" (UID: \"aa2d375b-2949-4037-a7f2-33fff8c9fde6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647232 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/80d59877-4f3e-4680-9348-3109670fd514-csi-data-dir\") pod \"csi-hostpathplugin-57krf\" (UID: \"80d59877-4f3e-4680-9348-3109670fd514\") " pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647264 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvbmz\" (UniqueName: \"kubernetes.io/projected/80d59877-4f3e-4680-9348-3109670fd514-kube-api-access-tvbmz\") pod \"csi-hostpathplugin-57krf\" (UID: \"80d59877-4f3e-4680-9348-3109670fd514\") " pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647324 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16569a9d-7677-455d-87c6-7b2fb504b731-bound-sa-token\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647436 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16569a9d-7677-455d-87c6-7b2fb504b731-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647459 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9tv5\" (UniqueName: \"kubernetes.io/projected/40132c32-a2e7-4a23-a0db-4d5a389e8df1-kube-api-access-j9tv5\") pod \"catalog-operator-68c6474976-7r5vw\" (UID: \"40132c32-a2e7-4a23-a0db-4d5a389e8df1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647510 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5q8r\" (UniqueName: \"kubernetes.io/projected/06a44619-0a61-488d-a3ba-1ac282968063-kube-api-access-p5q8r\") pod \"service-ca-operator-777779d784-7dwhv\" (UID: \"06a44619-0a61-488d-a3ba-1ac282968063\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dwhv" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647569 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa2d375b-2949-4037-a7f2-33fff8c9fde6-config-volume\") pod \"collect-profiles-29522265-s4986\" (UID: \"aa2d375b-2949-4037-a7f2-33fff8c9fde6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647604 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/80d59877-4f3e-4680-9348-3109670fd514-mountpoint-dir\") pod \"csi-hostpathplugin-57krf\" (UID: \"80d59877-4f3e-4680-9348-3109670fd514\") " pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647620 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8pf7\" (UniqueName: \"kubernetes.io/projected/61861be4-c520-4f21-9a1b-4ad3bb11ba3c-kube-api-access-j8pf7\") pod \"service-ca-9c57cc56f-k497l\" (UID: \"61861be4-c520-4f21-9a1b-4ad3bb11ba3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-k497l" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647662 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp5bz\" (UniqueName: \"kubernetes.io/projected/5adc4cc4-935c-4a7b-a68c-442269bc7c5d-kube-api-access-zp5bz\") pod \"dns-default-9wgts\" (UID: \"5adc4cc4-935c-4a7b-a68c-442269bc7c5d\") " pod="openshift-dns/dns-default-9wgts" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647678 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcnjp\" (UniqueName: 
\"kubernetes.io/projected/aa2d375b-2949-4037-a7f2-33fff8c9fde6-kube-api-access-lcnjp\") pod \"collect-profiles-29522265-s4986\" (UID: \"aa2d375b-2949-4037-a7f2-33fff8c9fde6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647777 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f8cj\" (UniqueName: \"kubernetes.io/projected/b8e2b9ad-4101-435c-b61f-c105ceb732eb-kube-api-access-7f8cj\") pod \"ingress-canary-s9pl9\" (UID: \"b8e2b9ad-4101-435c-b61f-c105ceb732eb\") " pod="openshift-ingress-canary/ingress-canary-s9pl9" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647826 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16569a9d-7677-455d-87c6-7b2fb504b731-registry-tls\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647843 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b8e2b9ad-4101-435c-b61f-c105ceb732eb-cert\") pod \"ingress-canary-s9pl9\" (UID: \"b8e2b9ad-4101-435c-b61f-c105ceb732eb\") " pod="openshift-ingress-canary/ingress-canary-s9pl9" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647977 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5xcv\" (UniqueName: \"kubernetes.io/projected/16569a9d-7677-455d-87c6-7b2fb504b731-kube-api-access-v5xcv\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.647997 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06a44619-0a61-488d-a3ba-1ac282968063-config\") pod \"service-ca-operator-777779d784-7dwhv\" (UID: \"06a44619-0a61-488d-a3ba-1ac282968063\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dwhv" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.648073 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16569a9d-7677-455d-87c6-7b2fb504b731-registry-certificates\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.648095 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6ff7bf32-ea2f-46af-89c7-7467a9e48f06-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5d2h9\" (UID: \"6ff7bf32-ea2f-46af-89c7-7467a9e48f06\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5d2h9" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.648157 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5adc4cc4-935c-4a7b-a68c-442269bc7c5d-metrics-tls\") pod \"dns-default-9wgts\" (UID: \"5adc4cc4-935c-4a7b-a68c-442269bc7c5d\") " pod="openshift-dns/dns-default-9wgts" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.648246 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5adc4cc4-935c-4a7b-a68c-442269bc7c5d-config-volume\") pod \"dns-default-9wgts\" (UID: \"5adc4cc4-935c-4a7b-a68c-442269bc7c5d\") " pod="openshift-dns/dns-default-9wgts" Feb 17 13:47:32 crc 
kubenswrapper[4833]: I0217 13:47:32.648330 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16569a9d-7677-455d-87c6-7b2fb504b731-trusted-ca\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.648352 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fskw\" (UniqueName: \"kubernetes.io/projected/6ff7bf32-ea2f-46af-89c7-7467a9e48f06-kube-api-access-7fskw\") pod \"machine-config-controller-84d6567774-5d2h9\" (UID: \"6ff7bf32-ea2f-46af-89c7-7467a9e48f06\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5d2h9" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.649391 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8662\" (UniqueName: \"kubernetes.io/projected/6d2e32a4-51dc-4406-834c-58392b4727b2-kube-api-access-b8662\") pod \"machine-config-operator-74547568cd-87pnl\" (UID: \"6d2e32a4-51dc-4406-834c-58392b4727b2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.649418 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ff7bf32-ea2f-46af-89c7-7467a9e48f06-proxy-tls\") pod \"machine-config-controller-84d6567774-5d2h9\" (UID: \"6ff7bf32-ea2f-46af-89c7-7467a9e48f06\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5d2h9" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.649550 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/61861be4-c520-4f21-9a1b-4ad3bb11ba3c-signing-key\") pod 
\"service-ca-9c57cc56f-k497l\" (UID: \"61861be4-c520-4f21-9a1b-4ad3bb11ba3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-k497l" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.649730 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6d2e32a4-51dc-4406-834c-58392b4727b2-images\") pod \"machine-config-operator-74547568cd-87pnl\" (UID: \"6d2e32a4-51dc-4406-834c-58392b4727b2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.649757 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/40132c32-a2e7-4a23-a0db-4d5a389e8df1-profile-collector-cert\") pod \"catalog-operator-68c6474976-7r5vw\" (UID: \"40132c32-a2e7-4a23-a0db-4d5a389e8df1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.649778 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/80d59877-4f3e-4680-9348-3109670fd514-registration-dir\") pod \"csi-hostpathplugin-57krf\" (UID: \"80d59877-4f3e-4680-9348-3109670fd514\") " pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.649841 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/40132c32-a2e7-4a23-a0db-4d5a389e8df1-srv-cert\") pod \"catalog-operator-68c6474976-7r5vw\" (UID: \"40132c32-a2e7-4a23-a0db-4d5a389e8df1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.649864 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/06a44619-0a61-488d-a3ba-1ac282968063-serving-cert\") pod \"service-ca-operator-777779d784-7dwhv\" (UID: \"06a44619-0a61-488d-a3ba-1ac282968063\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dwhv" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.651995 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16569a9d-7677-455d-87c6-7b2fb504b731-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.652754 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16569a9d-7677-455d-87c6-7b2fb504b731-registry-certificates\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.654120 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06a44619-0a61-488d-a3ba-1ac282968063-config\") pod \"service-ca-operator-777779d784-7dwhv\" (UID: \"06a44619-0a61-488d-a3ba-1ac282968063\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dwhv" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.656426 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6ff7bf32-ea2f-46af-89c7-7467a9e48f06-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5d2h9\" (UID: \"6ff7bf32-ea2f-46af-89c7-7467a9e48f06\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5d2h9" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.657724 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6d2e32a4-51dc-4406-834c-58392b4727b2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-87pnl\" (UID: \"6d2e32a4-51dc-4406-834c-58392b4727b2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl" Feb 17 13:47:32 crc kubenswrapper[4833]: E0217 13:47:32.660500 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:33.160476423 +0000 UTC m=+142.795575856 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.671698 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.672667 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16569a9d-7677-455d-87c6-7b2fb504b731-trusted-ca\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.673422 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16569a9d-7677-455d-87c6-7b2fb504b731-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.673890 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ff7bf32-ea2f-46af-89c7-7467a9e48f06-proxy-tls\") pod \"machine-config-controller-84d6567774-5d2h9\" (UID: \"6ff7bf32-ea2f-46af-89c7-7467a9e48f06\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5d2h9" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.674008 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06a44619-0a61-488d-a3ba-1ac282968063-serving-cert\") pod \"service-ca-operator-777779d784-7dwhv\" (UID: \"06a44619-0a61-488d-a3ba-1ac282968063\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dwhv" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.674327 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6d2e32a4-51dc-4406-834c-58392b4727b2-images\") pod 
\"machine-config-operator-74547568cd-87pnl\" (UID: \"6d2e32a4-51dc-4406-834c-58392b4727b2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.677613 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/40132c32-a2e7-4a23-a0db-4d5a389e8df1-profile-collector-cert\") pod \"catalog-operator-68c6474976-7r5vw\" (UID: \"40132c32-a2e7-4a23-a0db-4d5a389e8df1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.678371 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/40132c32-a2e7-4a23-a0db-4d5a389e8df1-srv-cert\") pod \"catalog-operator-68c6474976-7r5vw\" (UID: \"40132c32-a2e7-4a23-a0db-4d5a389e8df1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.678772 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d2e32a4-51dc-4406-834c-58392b4727b2-proxy-tls\") pod \"machine-config-operator-74547568cd-87pnl\" (UID: \"6d2e32a4-51dc-4406-834c-58392b4727b2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.691161 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16569a9d-7677-455d-87c6-7b2fb504b731-registry-tls\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.692465 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9tv5\" (UniqueName: 
\"kubernetes.io/projected/40132c32-a2e7-4a23-a0db-4d5a389e8df1-kube-api-access-j9tv5\") pod \"catalog-operator-68c6474976-7r5vw\" (UID: \"40132c32-a2e7-4a23-a0db-4d5a389e8df1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.715397 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5q8r\" (UniqueName: \"kubernetes.io/projected/06a44619-0a61-488d-a3ba-1ac282968063-kube-api-access-p5q8r\") pod \"service-ca-operator-777779d784-7dwhv\" (UID: \"06a44619-0a61-488d-a3ba-1ac282968063\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dwhv" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.736626 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8662\" (UniqueName: \"kubernetes.io/projected/6d2e32a4-51dc-4406-834c-58392b4727b2-kube-api-access-b8662\") pod \"machine-config-operator-74547568cd-87pnl\" (UID: \"6d2e32a4-51dc-4406-834c-58392b4727b2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.737986 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld"] Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.752242 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/80d59877-4f3e-4680-9348-3109670fd514-registration-dir\") pod \"csi-hostpathplugin-57krf\" (UID: \"80d59877-4f3e-4680-9348-3109670fd514\") " pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.752296 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/80d59877-4f3e-4680-9348-3109670fd514-socket-dir\") pod 
\"csi-hostpathplugin-57krf\" (UID: \"80d59877-4f3e-4680-9348-3109670fd514\") " pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.752326 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/80d59877-4f3e-4680-9348-3109670fd514-plugins-dir\") pod \"csi-hostpathplugin-57krf\" (UID: \"80d59877-4f3e-4680-9348-3109670fd514\") " pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.752343 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/61861be4-c520-4f21-9a1b-4ad3bb11ba3c-signing-cabundle\") pod \"service-ca-9c57cc56f-k497l\" (UID: \"61861be4-c520-4f21-9a1b-4ad3bb11ba3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-k497l" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.752375 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa2d375b-2949-4037-a7f2-33fff8c9fde6-secret-volume\") pod \"collect-profiles-29522265-s4986\" (UID: \"aa2d375b-2949-4037-a7f2-33fff8c9fde6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.752394 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/80d59877-4f3e-4680-9348-3109670fd514-csi-data-dir\") pod \"csi-hostpathplugin-57krf\" (UID: \"80d59877-4f3e-4680-9348-3109670fd514\") " pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.752411 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvbmz\" (UniqueName: \"kubernetes.io/projected/80d59877-4f3e-4680-9348-3109670fd514-kube-api-access-tvbmz\") pod 
\"csi-hostpathplugin-57krf\" (UID: \"80d59877-4f3e-4680-9348-3109670fd514\") " pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.752470 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa2d375b-2949-4037-a7f2-33fff8c9fde6-config-volume\") pod \"collect-profiles-29522265-s4986\" (UID: \"aa2d375b-2949-4037-a7f2-33fff8c9fde6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.752487 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/80d59877-4f3e-4680-9348-3109670fd514-mountpoint-dir\") pod \"csi-hostpathplugin-57krf\" (UID: \"80d59877-4f3e-4680-9348-3109670fd514\") " pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.752519 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8pf7\" (UniqueName: \"kubernetes.io/projected/61861be4-c520-4f21-9a1b-4ad3bb11ba3c-kube-api-access-j8pf7\") pod \"service-ca-9c57cc56f-k497l\" (UID: \"61861be4-c520-4f21-9a1b-4ad3bb11ba3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-k497l" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.752533 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcnjp\" (UniqueName: \"kubernetes.io/projected/aa2d375b-2949-4037-a7f2-33fff8c9fde6-kube-api-access-lcnjp\") pod \"collect-profiles-29522265-s4986\" (UID: \"aa2d375b-2949-4037-a7f2-33fff8c9fde6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.752552 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp5bz\" (UniqueName: 
\"kubernetes.io/projected/5adc4cc4-935c-4a7b-a68c-442269bc7c5d-kube-api-access-zp5bz\") pod \"dns-default-9wgts\" (UID: \"5adc4cc4-935c-4a7b-a68c-442269bc7c5d\") " pod="openshift-dns/dns-default-9wgts" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.752568 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f8cj\" (UniqueName: \"kubernetes.io/projected/b8e2b9ad-4101-435c-b61f-c105ceb732eb-kube-api-access-7f8cj\") pod \"ingress-canary-s9pl9\" (UID: \"b8e2b9ad-4101-435c-b61f-c105ceb732eb\") " pod="openshift-ingress-canary/ingress-canary-s9pl9" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.752601 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b8e2b9ad-4101-435c-b61f-c105ceb732eb-cert\") pod \"ingress-canary-s9pl9\" (UID: \"b8e2b9ad-4101-435c-b61f-c105ceb732eb\") " pod="openshift-ingress-canary/ingress-canary-s9pl9" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.752654 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5adc4cc4-935c-4a7b-a68c-442269bc7c5d-metrics-tls\") pod \"dns-default-9wgts\" (UID: \"5adc4cc4-935c-4a7b-a68c-442269bc7c5d\") " pod="openshift-dns/dns-default-9wgts" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.752692 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5adc4cc4-935c-4a7b-a68c-442269bc7c5d-config-volume\") pod \"dns-default-9wgts\" (UID: \"5adc4cc4-935c-4a7b-a68c-442269bc7c5d\") " pod="openshift-dns/dns-default-9wgts" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.752717 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.752753 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/61861be4-c520-4f21-9a1b-4ad3bb11ba3c-signing-key\") pod \"service-ca-9c57cc56f-k497l\" (UID: \"61861be4-c520-4f21-9a1b-4ad3bb11ba3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-k497l" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.752757 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5xcv\" (UniqueName: \"kubernetes.io/projected/16569a9d-7677-455d-87c6-7b2fb504b731-kube-api-access-v5xcv\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.753620 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/80d59877-4f3e-4680-9348-3109670fd514-mountpoint-dir\") pod \"csi-hostpathplugin-57krf\" (UID: \"80d59877-4f3e-4680-9348-3109670fd514\") " pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.753806 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/80d59877-4f3e-4680-9348-3109670fd514-registration-dir\") pod \"csi-hostpathplugin-57krf\" (UID: \"80d59877-4f3e-4680-9348-3109670fd514\") " pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.753853 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/80d59877-4f3e-4680-9348-3109670fd514-socket-dir\") pod 
\"csi-hostpathplugin-57krf\" (UID: \"80d59877-4f3e-4680-9348-3109670fd514\") " pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.753882 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/80d59877-4f3e-4680-9348-3109670fd514-plugins-dir\") pod \"csi-hostpathplugin-57krf\" (UID: \"80d59877-4f3e-4680-9348-3109670fd514\") " pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.754519 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"] Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.754529 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/61861be4-c520-4f21-9a1b-4ad3bb11ba3c-signing-cabundle\") pod \"service-ca-9c57cc56f-k497l\" (UID: \"61861be4-c520-4f21-9a1b-4ad3bb11ba3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-k497l" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.755956 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/80d59877-4f3e-4680-9348-3109670fd514-csi-data-dir\") pod \"csi-hostpathplugin-57krf\" (UID: \"80d59877-4f3e-4680-9348-3109670fd514\") " pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.756638 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5adc4cc4-935c-4a7b-a68c-442269bc7c5d-config-volume\") pod \"dns-default-9wgts\" (UID: \"5adc4cc4-935c-4a7b-a68c-442269bc7c5d\") " pod="openshift-dns/dns-default-9wgts" Feb 17 13:47:32 crc kubenswrapper[4833]: E0217 13:47:32.757665 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:33.257649272 +0000 UTC m=+142.892748735 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.761405 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xj6dt" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.761702 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa2d375b-2949-4037-a7f2-33fff8c9fde6-config-volume\") pod \"collect-profiles-29522265-s4986\" (UID: \"aa2d375b-2949-4037-a7f2-33fff8c9fde6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.764265 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa2d375b-2949-4037-a7f2-33fff8c9fde6-secret-volume\") pod \"collect-profiles-29522265-s4986\" (UID: \"aa2d375b-2949-4037-a7f2-33fff8c9fde6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.764355 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/5adc4cc4-935c-4a7b-a68c-442269bc7c5d-metrics-tls\") pod \"dns-default-9wgts\" (UID: \"5adc4cc4-935c-4a7b-a68c-442269bc7c5d\") " pod="openshift-dns/dns-default-9wgts" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.764383 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b8e2b9ad-4101-435c-b61f-c105ceb732eb-cert\") pod \"ingress-canary-s9pl9\" (UID: \"b8e2b9ad-4101-435c-b61f-c105ceb732eb\") " pod="openshift-ingress-canary/ingress-canary-s9pl9" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.764540 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-stnj2"] Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.765735 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/61861be4-c520-4f21-9a1b-4ad3bb11ba3c-signing-key\") pod \"service-ca-9c57cc56f-k497l\" (UID: \"61861be4-c520-4f21-9a1b-4ad3bb11ba3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-k497l" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.796677 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16569a9d-7677-455d-87c6-7b2fb504b731-bound-sa-token\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.798345 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-btl28" event={"ID":"3d4005d2-d765-4a8e-9b85-8c49d8238995","Type":"ContainerStarted","Data":"61177d8a937f1a00422520bd062f3b7e3453f76fc63d5b82ada6cfb5329dd2bf"} Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.798472 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-btl28" event={"ID":"3d4005d2-d765-4a8e-9b85-8c49d8238995","Type":"ContainerStarted","Data":"187041e73037b7a48cfa5b7776cf7077ff1d355df4f480edcb40484aa4bb8b6f"} Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.800560 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-btl28" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.801325 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vtrsq" event={"ID":"22bea415-75b9-4f36-a531-62617ed244c8","Type":"ContainerStarted","Data":"056e984412c32f396ec46537256e25c9a840de983d33b91a9b600ab95b971040"} Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.809821 4833 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-btl28 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.809853 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-btl28" podUID="3d4005d2-d765-4a8e-9b85-8c49d8238995" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.816050 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hjdrd" event={"ID":"af0f5f19-3c55-45a1-9a8f-66e55fd46683","Type":"ContainerStarted","Data":"79a3de19ed8e7fc70090b84421ef755ecdda5513d74a9728ac2b9940d5d1ef51"} Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.816094 4833 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hjdrd" event={"ID":"af0f5f19-3c55-45a1-9a8f-66e55fd46683","Type":"ContainerStarted","Data":"480dd84ad4dc2b16cf7d5ec1c02f6995b3f42d14acf321a43a717ecaf178b0ee"} Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.828880 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" event={"ID":"d5560b0e-f2b1-469b-b989-f2abce8e9b8b","Type":"ContainerStarted","Data":"44d04920cf373b2887309edfd6dc2c8d17e66643b7d20717da02e3c24c11c426"} Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.831423 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lsrj4" event={"ID":"7998df5b-fb36-4561-870d-80e55c87facd","Type":"ContainerStarted","Data":"72e0179722e68d5e7a09fe65f848d00711ea38148a79f5febf2cedd1d8681af1"} Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.833648 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f8cj\" (UniqueName: \"kubernetes.io/projected/b8e2b9ad-4101-435c-b61f-c105ceb732eb-kube-api-access-7f8cj\") pod \"ingress-canary-s9pl9\" (UID: \"b8e2b9ad-4101-435c-b61f-c105ceb732eb\") " pod="openshift-ingress-canary/ingress-canary-s9pl9" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.834092 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fskw\" (UniqueName: \"kubernetes.io/projected/6ff7bf32-ea2f-46af-89c7-7467a9e48f06-kube-api-access-7fskw\") pod \"machine-config-controller-84d6567774-5d2h9\" (UID: \"6ff7bf32-ea2f-46af-89c7-7467a9e48f06\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5d2h9" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.836119 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.839672 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dmmfm" event={"ID":"9d9814af-c66a-49fd-a3ca-814e8b0caf48","Type":"ContainerStarted","Data":"ec09475942da223e57a881f5d1fcb9ed025d9e32f765ac0683b3059d34a17b32"} Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.845552 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xwz2m" event={"ID":"b617c42f-c749-41e5-a305-692a4c631656","Type":"ContainerStarted","Data":"ebd1a6d9f48e5fec33b85b8c830eeaaf522672e5f57c55d9a1d1b0144823535c"} Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.847941 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-slqbd"] Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.850172 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf" event={"ID":"a441724f-e9ab-46fe-8bb3-9564e6b87e7e","Type":"ContainerStarted","Data":"0ce9d3bea10b330fff1bfc2e790c8e27d9ae40842b136f4592ab6140e30c71ed"} Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.850210 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf" event={"ID":"a441724f-e9ab-46fe-8bb3-9564e6b87e7e","Type":"ContainerStarted","Data":"9114b7cc66bacb852d46e88f2a93447caa749bf138691b43bf887cd0210b8f7a"} Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.850493 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.851384 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g" event={"ID":"87a6deb5-a486-4485-9e62-e9eb946878ca","Type":"ContainerStarted","Data":"35bf50de1220bceb9509335a75e85c70fe91eedf27acc37741657efd9cb7234f"} Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.852346 4833 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-g2gvf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.852384 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf" podUID="a441724f-e9ab-46fe-8bb3-9564e6b87e7e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.853119 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:32 crc kubenswrapper[4833]: E0217 13:47:32.853411 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:33.353395058 +0000 UTC m=+142.988494491 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.853694 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4wnpt" event={"ID":"116808cb-b59f-4f18-8fa1-a383fba544d9","Type":"ContainerStarted","Data":"ab1298dfabe82a21ccfe8e424862914bb0055f6c8b641d81073ad2b178ec0191"} Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.853720 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4wnpt" event={"ID":"116808cb-b59f-4f18-8fa1-a383fba544d9","Type":"ContainerStarted","Data":"791adbedb987d42f6e456c8f08c6bb2bdcbc903f85851ae63b48c34aea57ba7e"} Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.854114 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-4wnpt" Feb 17 13:47:32 crc kubenswrapper[4833]: E0217 13:47:32.854144 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:33.35413049 +0000 UTC m=+142.989229923 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.853566 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.855848 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wc6g2" event={"ID":"6a0d00bf-a97f-4ccb-ba05-aba2a220014a","Type":"ContainerStarted","Data":"33dee6e067ef21dd7e9a8c972a4d72850382e2bc503d92467ccf54cf48620601"} Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.860129 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4"] Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.860335 4833 patch_prober.go:28] interesting pod/console-operator-58897d9998-4wnpt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.860374 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4wnpt" 
podUID="116808cb-b59f-4f18-8fa1-a383fba544d9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.860511 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nsksg" event={"ID":"12eba345-07f7-4472-8a45-d0bd87317b0a","Type":"ContainerStarted","Data":"da78e933f627f8322f41413ad581a34894bb3a3c4846aac3098b81ada06967b6"} Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.869544 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8pf7\" (UniqueName: \"kubernetes.io/projected/61861be4-c520-4f21-9a1b-4ad3bb11ba3c-kube-api-access-j8pf7\") pod \"service-ca-9c57cc56f-k497l\" (UID: \"61861be4-c520-4f21-9a1b-4ad3bb11ba3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-k497l" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.871060 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcnjp\" (UniqueName: \"kubernetes.io/projected/aa2d375b-2949-4037-a7f2-33fff8c9fde6-kube-api-access-lcnjp\") pod \"collect-profiles-29522265-s4986\" (UID: \"aa2d375b-2949-4037-a7f2-33fff8c9fde6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.873299 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dwhv" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.890391 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp5bz\" (UniqueName: \"kubernetes.io/projected/5adc4cc4-935c-4a7b-a68c-442269bc7c5d-kube-api-access-zp5bz\") pod \"dns-default-9wgts\" (UID: \"5adc4cc4-935c-4a7b-a68c-442269bc7c5d\") " pod="openshift-dns/dns-default-9wgts" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.899699 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-znglf"] Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.899847 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.911608 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-k497l" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.915879 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvbmz\" (UniqueName: \"kubernetes.io/projected/80d59877-4f3e-4680-9348-3109670fd514-kube-api-access-tvbmz\") pod \"csi-hostpathplugin-57krf\" (UID: \"80d59877-4f3e-4680-9348-3109670fd514\") " pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.926180 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-s9pl9" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.950060 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-9wgts" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.950745 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-57krf" Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.955949 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:32 crc kubenswrapper[4833]: E0217 13:47:32.957216 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:33.457200734 +0000 UTC m=+143.092300167 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:32 crc kubenswrapper[4833]: W0217 13:47:32.977015 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc251713f_cec1_4ae0_a70b_553ab3b74a5b.slice/crio-a67eecb75310c803ba160883a424868374465669c4641cb2ec7007bad0225c97 WatchSource:0}: Error finding container a67eecb75310c803ba160883a424868374465669c4641cb2ec7007bad0225c97: Status 404 returned error can't find the container with id a67eecb75310c803ba160883a424868374465669c4641cb2ec7007bad0225c97 Feb 17 13:47:32 crc kubenswrapper[4833]: I0217 13:47:32.981595 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl" Feb 17 13:47:33 crc kubenswrapper[4833]: W0217 13:47:33.011391 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10816227_9540_49c4_bd68_82e7810b9e06.slice/crio-a9ffc883c5954f131b0ebe3da5f44d543260cd51c685df598c1b8582c658c3cf WatchSource:0}: Error finding container a9ffc883c5954f131b0ebe3da5f44d543260cd51c685df598c1b8582c658c3cf: Status 404 returned error can't find the container with id a9ffc883c5954f131b0ebe3da5f44d543260cd51c685df598c1b8582c658c3cf Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.082605 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:33 crc kubenswrapper[4833]: E0217 13:47:33.082903 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:33.58289112 +0000 UTC m=+143.217990553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.091245 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sgpfh"] Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.091279 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6pb"] Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.093644 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5d2h9" Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.109007 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h2rpb"] Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.114867 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2"] Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.116025 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-27vkt"] Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.126913 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2mlww"] Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.183460 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:33 crc kubenswrapper[4833]: E0217 13:47:33.184502 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:33.68448129 +0000 UTC m=+143.319580723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:33 crc kubenswrapper[4833]: W0217 13:47:33.284256 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec012f89_9c82_4c49_9a7e_892979946444.slice/crio-311d4da38b97cd83d3a1991685467caa9b1591fb41fdb370543184b8c6156683 WatchSource:0}: Error finding container 311d4da38b97cd83d3a1991685467caa9b1591fb41fdb370543184b8c6156683: Status 404 returned error can't find the container with id 311d4da38b97cd83d3a1991685467caa9b1591fb41fdb370543184b8c6156683 Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.285091 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: 
\"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:33 crc kubenswrapper[4833]: E0217 13:47:33.285405 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:33.78539441 +0000 UTC m=+143.420493843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:33 crc kubenswrapper[4833]: W0217 13:47:33.286955 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec31a3ca_f7c7_4af9_b9ea_57f9eb4a8815.slice/crio-e29b4deb045c4362a3635b22a035209e3126022adc228f6f18a22d8ea820987f WatchSource:0}: Error finding container e29b4deb045c4362a3635b22a035209e3126022adc228f6f18a22d8ea820987f: Status 404 returned error can't find the container with id e29b4deb045c4362a3635b22a035209e3126022adc228f6f18a22d8ea820987f Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.343300 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pc9vj"] Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.365911 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvzml"] Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.370348 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/downloads-7954f5f757-fps2n"] Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.386470 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:33 crc kubenswrapper[4833]: E0217 13:47:33.386580 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:33.886558007 +0000 UTC m=+143.521657450 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.386792 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:33 crc kubenswrapper[4833]: E0217 13:47:33.387075 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 13:47:33.887067882 +0000 UTC m=+143.522167315 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.468185 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986"]
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.489396 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 13:47:33 crc kubenswrapper[4833]: E0217 13:47:33.489711 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:33.989695743 +0000 UTC m=+143.624795176 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.544225 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2"]
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.566349 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6nkfm"]
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.570939 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk"]
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.591488 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx"
Feb 17 13:47:33 crc kubenswrapper[4833]: E0217 13:47:33.591843 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:34.091828489 +0000 UTC m=+143.726927922 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.603704 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-4wnpt" podStartSLOduration=123.603688022 podStartE2EDuration="2m3.603688022s" podCreationTimestamp="2026-02-17 13:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:33.601467886 +0000 UTC m=+143.236567339" watchObservedRunningTime="2026-02-17 13:47:33.603688022 +0000 UTC m=+143.238787445"
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.608172 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-knm6z"]
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.609813 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xj6dt"]
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.611509 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw"]
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.692614 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 13:47:33 crc kubenswrapper[4833]: E0217 13:47:33.693169 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:34.19311202 +0000 UTC m=+143.828211453 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.736982 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-57krf"]
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.770104 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-s9pl9"]
Feb 17 13:47:33 crc kubenswrapper[4833]: W0217 13:47:33.773967 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bd21e79_5975_4f5f_976e_94a05e2df000.slice/crio-a6d5a131b35f29aa7391c01cb00e6e9d685924249dcb8f42321cb183d769ffcb WatchSource:0}: Error finding container a6d5a131b35f29aa7391c01cb00e6e9d685924249dcb8f42321cb183d769ffcb: Status 404 returned error can't find the container with id a6d5a131b35f29aa7391c01cb00e6e9d685924249dcb8f42321cb183d769ffcb
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.780792 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9wgts"]
Feb 17 13:47:33 crc kubenswrapper[4833]: W0217 13:47:33.788080 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00b43a88_457b_4c5a_ab44_2af8e47b2c2d.slice/crio-4e51e7a81339de9800a6d1abec54dfb9594aca1921052aa674852574cc39720f WatchSource:0}: Error finding container 4e51e7a81339de9800a6d1abec54dfb9594aca1921052aa674852574cc39720f: Status 404 returned error can't find the container with id 4e51e7a81339de9800a6d1abec54dfb9594aca1921052aa674852574cc39720f
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.794481 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx"
Feb 17 13:47:33 crc kubenswrapper[4833]: E0217 13:47:33.794931 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:34.294910466 +0000 UTC m=+143.930009909 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:33 crc kubenswrapper[4833]: W0217 13:47:33.807000 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda39a9177_9838_434f_a2e0_c8359ff146fe.slice/crio-26b13cba661e24bdb850b0646270321882bd5fe2bfba64029cb56f9bd4e69075 WatchSource:0}: Error finding container 26b13cba661e24bdb850b0646270321882bd5fe2bfba64029cb56f9bd4e69075: Status 404 returned error can't find the container with id 26b13cba661e24bdb850b0646270321882bd5fe2bfba64029cb56f9bd4e69075
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.818243 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-k497l"]
Feb 17 13:47:33 crc kubenswrapper[4833]: W0217 13:47:33.824517 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a0991bf_b28c_471a_8c28_24b461784fdd.slice/crio-493b4ee223cdb0b05fca2936e2237a81c2d5cc87926ef55663a083692a92bcb3 WatchSource:0}: Error finding container 493b4ee223cdb0b05fca2936e2237a81c2d5cc87926ef55663a083692a92bcb3: Status 404 returned error can't find the container with id 493b4ee223cdb0b05fca2936e2237a81c2d5cc87926ef55663a083692a92bcb3
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.877214 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wc6g2" event={"ID":"6a0d00bf-a97f-4ccb-ba05-aba2a220014a","Type":"ContainerStarted","Data":"ef274c83bf5986c8264c7b50ebec1d1580ca06c8ba6b85c8e4974e6b2daab71e"}
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.882311 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dmmfm" event={"ID":"9d9814af-c66a-49fd-a3ca-814e8b0caf48","Type":"ContainerStarted","Data":"ba3466fb05fe0fb4ebfaa792d1375d9bc3ef6b3b0c860974fbde6ee0d5c48d30"}
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.894574 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5d2h9"]
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.894985 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 13:47:33 crc kubenswrapper[4833]: E0217 13:47:33.895305 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:34.39529041 +0000 UTC m=+144.030389843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.915564 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld" event={"ID":"e6036e20-bd82-411b-a2f5-0806db078ac0","Type":"ContainerStarted","Data":"f1373fe3fb11933473b5d782e3f5c3f3f27c21ba14f96c9a9faa39966a69f98b"}
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.915877 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld" event={"ID":"e6036e20-bd82-411b-a2f5-0806db078ac0","Type":"ContainerStarted","Data":"131a2c0d9af4b4121556d4d141ef0988d36bd0acef6615f2b3ec762dfaf58925"}
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.916173 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld"
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.916319 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7dwhv"]
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.921380 4833 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-xdsld container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.921418 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld" podUID="e6036e20-bd82-411b-a2f5-0806db078ac0" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.952842 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2" event={"ID":"04af8f3e-80ec-462f-bbda-a1d7e1ebd37f","Type":"ContainerStarted","Data":"6fa87fef31fcaf247497f0103d8ac3b9d485814d6cd5682c09e90f4fd634a6ca"}
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.954023 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-27vkt" event={"ID":"ec012f89-9c82-4c49-9a7e-892979946444","Type":"ContainerStarted","Data":"311d4da38b97cd83d3a1991685467caa9b1591fb41fdb370543184b8c6156683"}
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.955649 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nsksg" event={"ID":"12eba345-07f7-4472-8a45-d0bd87317b0a","Type":"ContainerStarted","Data":"bcaffc0ba32fef3109f3d19b44bb087bb899df9be1b0b8303fded2274d701725"}
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.965593 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-s9pl9" event={"ID":"b8e2b9ad-4101-435c-b61f-c105ceb732eb","Type":"ContainerStarted","Data":"f80d11f628576b970bd86855870f22932b4cc783985f72acb0ff2b257d71cd92"}
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.969357 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xj6dt" event={"ID":"bf443d79-e768-4da2-b385-e6b8072cb88e","Type":"ContainerStarted","Data":"545c11e99bf6d1184b887f3cb609168ca84e4158b37119abcbd09be76df71d02"}
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.988054 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xwz2m" event={"ID":"b617c42f-c749-41e5-a305-692a4c631656","Type":"ContainerStarted","Data":"ac5d722540308001923896959a864291127c6e8f7d157d23be8d8dc01b7f0d51"}
Feb 17 13:47:33 crc kubenswrapper[4833]: I0217 13:47:33.996884 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx"
Feb 17 13:47:33 crc kubenswrapper[4833]: E0217 13:47:33.997152 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:34.497140408 +0000 UTC m=+144.132239831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.010220 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g" event={"ID":"87a6deb5-a486-4485-9e62-e9eb946878ca","Type":"ContainerStarted","Data":"0f2521fcff4d7d645a1d93156f6a1700ca622233b28d79fe454ecf22c0e18b44"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.014248 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw" event={"ID":"40132c32-a2e7-4a23-a0db-4d5a389e8df1","Type":"ContainerStarted","Data":"7071a689caf068938cb2ba2bf725fd428e27aaf9ac8096845b68cd0f47a101b2"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.071270 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" event={"ID":"234db162-a495-45d1-8af9-7e2deaa2763c","Type":"ContainerStarted","Data":"f03d83af7de5eebd63f90d26ee64329f9b501f84488fcf8f1ff61a153f26ac30"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.076623 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl"]
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.097835 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 13:47:34 crc kubenswrapper[4833]: E0217 13:47:34.098032 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:34.598011347 +0000 UTC m=+144.233110780 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.098604 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx"
Feb 17 13:47:34 crc kubenswrapper[4833]: E0217 13:47:34.102750 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:34.602730717 +0000 UTC m=+144.237830190 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.106562 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvzml" event={"ID":"502603f9-5374-4fa7-8398-1f9d931e370f","Type":"ContainerStarted","Data":"f25497e8942bfa65fbdb96265762397f5b4e47cc82782cdab0784bb1820c8123"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.114587 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9wgts" event={"ID":"5adc4cc4-935c-4a7b-a68c-442269bc7c5d","Type":"ContainerStarted","Data":"446024a4cfe7bb760e0542f73f97bd99b17e76383b724c5bb783ff684b865250"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.125096 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-57krf" event={"ID":"80d59877-4f3e-4680-9348-3109670fd514","Type":"ContainerStarted","Data":"000bfcee47e1aa5437fb2f9f4cd5832717d0c1021fb5b95ef68e0eefbf0a779a"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.130613 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-btl28" podStartSLOduration=123.130592205 podStartE2EDuration="2m3.130592205s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:34.128622407 +0000 UTC m=+143.763721840" watchObservedRunningTime="2026-02-17 13:47:34.130592205 +0000 UTC m=+143.765691628"
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.137665 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pc9vj" event={"ID":"e90aa002-ad59-43a3-88db-f2e03408f40d","Type":"ContainerStarted","Data":"f6e50155b1768c3a8032df15a57a450adfae7f2e7ebe80861d84ce7f59342d24"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.141864 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm" event={"ID":"a39a9177-9838-434f-a2e0-c8359ff146fe","Type":"ContainerStarted","Data":"26b13cba661e24bdb850b0646270321882bd5fe2bfba64029cb56f9bd4e69075"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.146849 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sgpfh" event={"ID":"ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815","Type":"ContainerStarted","Data":"e29b4deb045c4362a3635b22a035209e3126022adc228f6f18a22d8ea820987f"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.150698 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lsrj4" event={"ID":"7998df5b-fb36-4561-870d-80e55c87facd","Type":"ContainerStarted","Data":"577024bfc2f2a54a428d3fd3670d9d6261f27db0dc95a69123242421a10daf6b"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.155335 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2" event={"ID":"9bd21e79-5975-4f5f-976e-94a05e2df000","Type":"ContainerStarted","Data":"a6d5a131b35f29aa7391c01cb00e6e9d685924249dcb8f42321cb183d769ffcb"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.175975 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg" event={"ID":"f1cb67f7-508e-44b5-9fa7-bb8f811812f8","Type":"ContainerStarted","Data":"69d81e89890b87abc84c2b3642ad91163713b617f5ce36f4b349975a6d30595a"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.179244 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-znglf" event={"ID":"10816227-9540-49c4-bd68-82e7810b9e06","Type":"ContainerStarted","Data":"a9ffc883c5954f131b0ebe3da5f44d543260cd51c685df598c1b8582c658c3cf"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.181838 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-stnj2" event={"ID":"c1e5bc60-f7d5-436d-9298-3099adb6bc0a","Type":"ContainerStarted","Data":"ebb55e6a965e341d151163228b3d289782cde2323c961db9a9ae15d89bf73dd3"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.181884 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-stnj2" event={"ID":"c1e5bc60-f7d5-436d-9298-3099adb6bc0a","Type":"ContainerStarted","Data":"885fdf29d6d32302e8dfeecd1615ccfc15137f3aacececcba2288c8a7444bd8d"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.187891 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-dmmfm"
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.188338 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hjdrd" event={"ID":"af0f5f19-3c55-45a1-9a8f-66e55fd46683","Type":"ContainerStarted","Data":"e55f9548113a80da54db6647ad10338618ec0e059b342fce869a735cef326f0b"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.193136 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dmmfm container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.193193 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmmfm" podUID="9d9814af-c66a-49fd-a3ca-814e8b0caf48" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.204960 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 13:47:34 crc kubenswrapper[4833]: E0217 13:47:34.205982 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:34.705961826 +0000 UTC m=+144.341061259 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.216397 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2mlww" event={"ID":"f5b88dcb-84d4-4311-8c01-860d17b444eb","Type":"ContainerStarted","Data":"89107903002ea13b548b1aff6a4273791da48982dd1d8cafbc30179e8659017b"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.223585 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6pb" event={"ID":"77fd0717-6d6c-46cf-a19a-29dce58a7176","Type":"ContainerStarted","Data":"0b8839aaa8820ca8eacfad2c297a495ce631ac99be96020a6353d77e0e93a337"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.231012 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4" event={"ID":"c251713f-cec1-4ae0-a70b-553ab3b74a5b","Type":"ContainerStarted","Data":"a67eecb75310c803ba160883a424868374465669c4641cb2ec7007bad0225c97"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.250090 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" event={"ID":"7a0991bf-b28c-471a-8c28-24b461784fdd","Type":"ContainerStarted","Data":"493b4ee223cdb0b05fca2936e2237a81c2d5cc87926ef55663a083692a92bcb3"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.255634 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986" event={"ID":"aa2d375b-2949-4037-a7f2-33fff8c9fde6","Type":"ContainerStarted","Data":"3c4c6403629471dbf59166ac6e5f788b63b3588b4462f1b350862b5d89156d46"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.262269 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fps2n" event={"ID":"c91cc697-a95c-4c07-8750-878937f50446","Type":"ContainerStarted","Data":"7cead6687d5c379e528995ad9ef9bf95d5b65ded92068ce4eb497957795e855c"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.267383 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk" event={"ID":"00b43a88-457b-4c5a-ab44-2af8e47b2c2d","Type":"ContainerStarted","Data":"4e51e7a81339de9800a6d1abec54dfb9594aca1921052aa674852574cc39720f"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.273396 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-slqbd" event={"ID":"3e99f9e5-e0f4-4444-8303-f69571809455","Type":"ContainerStarted","Data":"66252b1615bf7cf337b9bed0ebe9accf8ad26593e413d208c0dc99a32809e954"}
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.274200 4833 patch_prober.go:28] interesting pod/console-operator-58897d9998-4wnpt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.274556 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4wnpt" podUID="116808cb-b59f-4f18-8fa1-a383fba544d9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.275748 4833 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-btl28 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.275800 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-btl28" podUID="3d4005d2-d765-4a8e-9b85-8c49d8238995" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.307130 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx"
Feb 17 13:47:34 crc kubenswrapper[4833]: E0217 13:47:34.311026 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:34.811007009 +0000 UTC m=+144.446106522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.408269 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 13:47:34 crc kubenswrapper[4833]: E0217 13:47:34.408486 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:34.908457385 +0000 UTC m=+144.543556818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.408848 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx"
Feb 17 13:47:34 crc kubenswrapper[4833]: E0217 13:47:34.409979 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:34.90996325 +0000 UTC m=+144.545062693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.449159 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf" podStartSLOduration=123.449142735 podStartE2EDuration="2m3.449142735s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:34.418938197 +0000 UTC m=+144.054037650" watchObservedRunningTime="2026-02-17 13:47:34.449142735 +0000 UTC m=+144.084242168"
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.514891 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 13:47:34 crc kubenswrapper[4833]: E0217 13:47:34.515062 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:35.015023053 +0000 UTC m=+144.650122486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.515348 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx"
Feb 17 13:47:34 crc kubenswrapper[4833]: E0217 13:47:34.516010 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:35.015987532 +0000 UTC m=+144.651086965 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.570367 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf" Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.629154 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:34 crc kubenswrapper[4833]: E0217 13:47:34.629247 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:35.129223388 +0000 UTC m=+144.764322821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.629299 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:34 crc kubenswrapper[4833]: E0217 13:47:34.629653 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:35.12964004 +0000 UTC m=+144.764739473 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.693932 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-jnj5g" podStartSLOduration=124.693913731 podStartE2EDuration="2m4.693913731s" podCreationTimestamp="2026-02-17 13:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:34.692099697 +0000 UTC m=+144.327199140" watchObservedRunningTime="2026-02-17 13:47:34.693913731 +0000 UTC m=+144.329013164" Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.694855 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-lsrj4" podStartSLOduration=5.694848729 podStartE2EDuration="5.694848729s" podCreationTimestamp="2026-02-17 13:47:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:34.652123009 +0000 UTC m=+144.287222442" watchObservedRunningTime="2026-02-17 13:47:34.694848729 +0000 UTC m=+144.329948162" Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.744229 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:34 crc kubenswrapper[4833]: E0217 13:47:34.744632 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:35.244613718 +0000 UTC m=+144.879713161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.754391 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld" podStartSLOduration=123.754366788 podStartE2EDuration="2m3.754366788s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:34.743956369 +0000 UTC m=+144.379055832" watchObservedRunningTime="2026-02-17 13:47:34.754366788 +0000 UTC m=+144.389466221" Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.789544 4833 csr.go:261] certificate signing request csr-n68hh is approved, waiting to be issued Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.789994 4833 csr.go:257] certificate signing request csr-n68hh is issued Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.834294 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/console-f9d7485db-xwz2m" podStartSLOduration=124.834277044 podStartE2EDuration="2m4.834277044s" podCreationTimestamp="2026-02-17 13:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:34.83347543 +0000 UTC m=+144.468574863" watchObservedRunningTime="2026-02-17 13:47:34.834277044 +0000 UTC m=+144.469376477" Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.835989 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-dmmfm" podStartSLOduration=123.835982165 podStartE2EDuration="2m3.835982165s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:34.775209488 +0000 UTC m=+144.410308931" watchObservedRunningTime="2026-02-17 13:47:34.835982165 +0000 UTC m=+144.471081598" Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.846066 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:34 crc kubenswrapper[4833]: E0217 13:47:34.846341 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:35.346331442 +0000 UTC m=+144.981430865 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.894211 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hjdrd" podStartSLOduration=125.894194524 podStartE2EDuration="2m5.894194524s" podCreationTimestamp="2026-02-17 13:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:34.893469532 +0000 UTC m=+144.528568965" watchObservedRunningTime="2026-02-17 13:47:34.894194524 +0000 UTC m=+144.529293957" Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.948003 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:34 crc kubenswrapper[4833]: E0217 13:47:34.948429 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:35.448415876 +0000 UTC m=+145.083515309 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:34 crc kubenswrapper[4833]: I0217 13:47:34.977616 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nsksg" podStartSLOduration=124.977597513 podStartE2EDuration="2m4.977597513s" podCreationTimestamp="2026-02-17 13:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:34.972974886 +0000 UTC m=+144.608074319" watchObservedRunningTime="2026-02-17 13:47:34.977597513 +0000 UTC m=+144.612696946" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.049783 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:35 crc kubenswrapper[4833]: E0217 13:47:35.050914 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:35.550879862 +0000 UTC m=+145.185979295 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.161623 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:35 crc kubenswrapper[4833]: E0217 13:47:35.162015 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:35.661997445 +0000 UTC m=+145.297096888 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.200272 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dmmfm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:47:35 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Feb 17 13:47:35 crc kubenswrapper[4833]: [+]process-running ok Feb 17 13:47:35 crc kubenswrapper[4833]: healthz check failed Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.200334 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmmfm" podUID="9d9814af-c66a-49fd-a3ca-814e8b0caf48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.264909 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:35 crc kubenswrapper[4833]: E0217 13:47:35.265439 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 13:47:35.76542194 +0000 UTC m=+145.400521373 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.326661 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nsksg" event={"ID":"12eba345-07f7-4472-8a45-d0bd87317b0a","Type":"ContainerStarted","Data":"d09519c0cde0cdddda2cfd082198b95002d92676089a96e9218c7d1c029e96e7"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.329264 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl" event={"ID":"6d2e32a4-51dc-4406-834c-58392b4727b2","Type":"ContainerStarted","Data":"64ee1523e5356f1d1974606b859bc5264d686baf2a596619759fbdd16f5a582a"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.356512 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vtrsq" event={"ID":"22bea415-75b9-4f36-a531-62617ed244c8","Type":"ContainerStarted","Data":"c4e5bc6f960790de2e7df373e1cde5a797bffe6dd452114bba342ffc68c4f608"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.366389 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:35 crc kubenswrapper[4833]: E0217 13:47:35.367599 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:35.867579126 +0000 UTC m=+145.502678569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.377998 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vtrsq" podStartSLOduration=124.377978286 podStartE2EDuration="2m4.377978286s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:35.376851342 +0000 UTC m=+145.011950775" watchObservedRunningTime="2026-02-17 13:47:35.377978286 +0000 UTC m=+145.013077719" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.398881 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk" event={"ID":"00b43a88-457b-4c5a-ab44-2af8e47b2c2d","Type":"ContainerStarted","Data":"89ccf1aec0ba6c155e5b41191854adced7a4e3b02270eeb73b05e536d6ecbc06"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.402859 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.409318 4833 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-cvhc2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" start-of-body= Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.409390 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2" podUID="9bd21e79-5975-4f5f-976e-94a05e2df000" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.426076 4833 generic.go:334] "Generic (PLEG): container finished" podID="d5560b0e-f2b1-469b-b989-f2abce8e9b8b" containerID="fd3071c597b879f320f688f7a80c9b4e6d1912df61650bf95a80984c769893e5" exitCode=0 Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.426163 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" event={"ID":"d5560b0e-f2b1-469b-b989-f2abce8e9b8b","Type":"ContainerDied","Data":"fd3071c597b879f320f688f7a80c9b4e6d1912df61650bf95a80984c769893e5"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.435379 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2" podStartSLOduration=124.435359221 podStartE2EDuration="2m4.435359221s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:35.424620632 +0000 UTC m=+145.059720065" watchObservedRunningTime="2026-02-17 13:47:35.435359221 +0000 UTC 
m=+145.070458654" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.441267 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-s9pl9" event={"ID":"b8e2b9ad-4101-435c-b61f-c105ceb732eb","Type":"ContainerStarted","Data":"98a54a0af702053dfe531a124233dae47eebd903331e8c46f28df33551389569"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.447936 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986" event={"ID":"aa2d375b-2949-4037-a7f2-33fff8c9fde6","Type":"ContainerStarted","Data":"f83a98b66d78b7cfbe5b8f5486c4da0c5577934efbeede77345bc35544ade1f2"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.459523 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-znglf" event={"ID":"10816227-9540-49c4-bd68-82e7810b9e06","Type":"ContainerStarted","Data":"0dd43ea95af2ccbf2157a7a52f7a2ec3e8fc2acae1823a01d0f2b640fee63b23"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.459864 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-znglf" event={"ID":"10816227-9540-49c4-bd68-82e7810b9e06","Type":"ContainerStarted","Data":"93c86e9ad65664c7c619d92a67e2970d9ec3f7935835a1dadd76e52d231ee79f"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.467902 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:35 crc kubenswrapper[4833]: E0217 13:47:35.468206 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:35.968196118 +0000 UTC m=+145.603295551 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.469951 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" event={"ID":"234db162-a495-45d1-8af9-7e2deaa2763c","Type":"ContainerStarted","Data":"0ef06d9f485d047103a23cd9fca597e8910ce306f9d7e448f56373604ca3023c"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.469988 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.472416 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wc6g2" event={"ID":"6a0d00bf-a97f-4ccb-ba05-aba2a220014a","Type":"ContainerStarted","Data":"c17f370c2968a0ce9fb0862387cd8cd1a0f2782283629063472fd1c8ca15abb3"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.481256 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2" event={"ID":"04af8f3e-80ec-462f-bbda-a1d7e1ebd37f","Type":"ContainerStarted","Data":"42131ce3ded6ba190a007c0a7e842eed4385497fb90a83fccc82e6b8f4bc951f"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.478497 4833 patch_prober.go:28] 
interesting pod/oauth-openshift-558db77b4-h2rpb container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" start-of-body= Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.481399 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" podUID="234db162-a495-45d1-8af9-7e2deaa2763c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.500203 4833 generic.go:334] "Generic (PLEG): container finished" podID="f1cb67f7-508e-44b5-9fa7-bb8f811812f8" containerID="e4ca62cb1b1d66f0914373319294b37ed0057dab5873a98b1494cdfd76e0f2b2" exitCode=0 Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.500274 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg" event={"ID":"f1cb67f7-508e-44b5-9fa7-bb8f811812f8","Type":"ContainerDied","Data":"e4ca62cb1b1d66f0914373319294b37ed0057dab5873a98b1494cdfd76e0f2b2"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.532264 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-slqbd" event={"ID":"3e99f9e5-e0f4-4444-8303-f69571809455","Type":"ContainerStarted","Data":"db08d98d20f733cda3de8b1db54d1d0f949b843b886353e2e9df383e0f64307c"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.538125 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5d2h9" event={"ID":"6ff7bf32-ea2f-46af-89c7-7467a9e48f06","Type":"ContainerStarted","Data":"db81835b78c10249e22fee50fa89ccdee195e4d21b92f054a4743c75363f1e36"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.538179 
4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5d2h9" event={"ID":"6ff7bf32-ea2f-46af-89c7-7467a9e48f06","Type":"ContainerStarted","Data":"264b56be4c1e4b210aecfeb5776d9f7ea04e088dfb1db567128ea2326430f3e6"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.549345 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-s9pl9" podStartSLOduration=6.549328289 podStartE2EDuration="6.549328289s" podCreationTimestamp="2026-02-17 13:47:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:35.500823567 +0000 UTC m=+145.135923000" watchObservedRunningTime="2026-02-17 13:47:35.549328289 +0000 UTC m=+145.184427722" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.556604 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-znglf" podStartSLOduration=124.556583565 podStartE2EDuration="2m4.556583565s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:35.549261817 +0000 UTC m=+145.184361250" watchObservedRunningTime="2026-02-17 13:47:35.556583565 +0000 UTC m=+145.191683008" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.569049 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:35 crc kubenswrapper[4833]: E0217 13:47:35.570145 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:36.070124168 +0000 UTC m=+145.705223601 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.582465 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986" podStartSLOduration=125.582451694 podStartE2EDuration="2m5.582451694s" podCreationTimestamp="2026-02-17 13:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:35.581284689 +0000 UTC m=+145.216384122" watchObservedRunningTime="2026-02-17 13:47:35.582451694 +0000 UTC m=+145.217551127" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.602633 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" podStartSLOduration=125.602618894 podStartE2EDuration="2m5.602618894s" podCreationTimestamp="2026-02-17 13:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:35.601882632 +0000 UTC m=+145.236982065" watchObservedRunningTime="2026-02-17 13:47:35.602618894 +0000 UTC m=+145.237718327" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.610590 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-k497l" event={"ID":"61861be4-c520-4f21-9a1b-4ad3bb11ba3c","Type":"ContainerStarted","Data":"c13851d0fc737c0c1cce21c2cd0e57baa86b90e6d8315ae9fafc0d7843c9d6d5"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.622052 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dwhv" event={"ID":"06a44619-0a61-488d-a3ba-1ac282968063","Type":"ContainerStarted","Data":"f2c01d37318378fc085b7760d20bafad2b708280b0053bf9ee1e5f3d1a921498"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.622194 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dwhv" event={"ID":"06a44619-0a61-488d-a3ba-1ac282968063","Type":"ContainerStarted","Data":"ae0dd24d0bd22c2c243f6cc145217e4a2dcac9c6344089401ea01467022e854e"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.632476 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-slqbd" podStartSLOduration=124.632458061 podStartE2EDuration="2m4.632458061s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:35.623112303 +0000 UTC m=+145.258211736" watchObservedRunningTime="2026-02-17 13:47:35.632458061 +0000 UTC m=+145.267557514" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.655103 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw" event={"ID":"40132c32-a2e7-4a23-a0db-4d5a389e8df1","Type":"ContainerStarted","Data":"acbca642eba924a159f4d76c332c823beb85dd5cbf43bcabcf3b499c794311c1"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.656021 4833 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.661254 4833 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7r5vw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.661324 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw" podUID="40132c32-a2e7-4a23-a0db-4d5a389e8df1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.664074 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5d2h9" podStartSLOduration=124.664028909 podStartE2EDuration="2m4.664028909s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:35.662099912 +0000 UTC m=+145.297199355" watchObservedRunningTime="2026-02-17 13:47:35.664028909 +0000 UTC m=+145.299128342" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.673925 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-27vkt" event={"ID":"ec012f89-9c82-4c49-9a7e-892979946444","Type":"ContainerStarted","Data":"6828265fdcda093198803d40cb1106ea7a3aca7edea197f57316d785f857a093"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.674071 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-27vkt" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.682847 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.685584 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2mlww" event={"ID":"f5b88dcb-84d4-4311-8c01-860d17b444eb","Type":"ContainerStarted","Data":"f11dd44d4ea96acf09b17fc173b7ec24740b4947356ed0a29a6b34a90004fdd6"} Feb 17 13:47:35 crc kubenswrapper[4833]: E0217 13:47:35.687708 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:36.187690463 +0000 UTC m=+145.822789986 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.695458 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-wc6g2" podStartSLOduration=124.695434563 podStartE2EDuration="2m4.695434563s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:35.692534297 +0000 UTC m=+145.327633730" watchObservedRunningTime="2026-02-17 13:47:35.695434563 +0000 UTC m=+145.330533996" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.709012 4833 generic.go:334] "Generic (PLEG): container finished" podID="c251713f-cec1-4ae0-a70b-553ab3b74a5b" containerID="5f9455dd11bf90b5db8852ff01508c7f1ae5437a27b0a7d9dfb73cd69c2c7b5e" exitCode=0 Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.709120 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4" event={"ID":"c251713f-cec1-4ae0-a70b-553ab3b74a5b","Type":"ContainerDied","Data":"5f9455dd11bf90b5db8852ff01508c7f1ae5437a27b0a7d9dfb73cd69c2c7b5e"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.715188 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fps2n" event={"ID":"c91cc697-a95c-4c07-8750-878937f50446","Type":"ContainerStarted","Data":"c74a62c067107aebb1eeb82fd4ed90dc960dce9860b866cf4d358ff46cd8342a"} Feb 17 13:47:35 crc 
kubenswrapper[4833]: I0217 13:47:35.719718 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fps2n" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.719807 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-fps2n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.719863 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fps2n" podUID="c91cc697-a95c-4c07-8750-878937f50446" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.720744 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r7lt2" podStartSLOduration=124.720725055 podStartE2EDuration="2m4.720725055s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:35.719975582 +0000 UTC m=+145.355075025" watchObservedRunningTime="2026-02-17 13:47:35.720725055 +0000 UTC m=+145.355824488" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.751192 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sgpfh" event={"ID":"ec31a3ca-f7c7-4af9-b9ea-57f9eb4a8815","Type":"ContainerStarted","Data":"a09139f44e83cde08016423f0e6354baa16f89cd8fb4d8f828b44caeb6587fe0"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.758676 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dwhv" podStartSLOduration=124.758658592 podStartE2EDuration="2m4.758658592s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:35.757147927 +0000 UTC m=+145.392247360" watchObservedRunningTime="2026-02-17 13:47:35.758658592 +0000 UTC m=+145.393758015" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.767328 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6pb" event={"ID":"77fd0717-6d6c-46cf-a19a-29dce58a7176","Type":"ContainerStarted","Data":"118de0cf90c51264761ca8af8c0a38210803f0454107c9b25d0a26b8dfe244d5"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.782879 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pc9vj" event={"ID":"e90aa002-ad59-43a3-88db-f2e03408f40d","Type":"ContainerStarted","Data":"a6278de71ad90ffb9ba6a281bfa9878f72ee2097c0f8fe47d57a81993b83656e"} Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.784582 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:35 crc kubenswrapper[4833]: E0217 13:47:35.785583 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:36.285568592 +0000 UTC m=+145.920668025 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.785550 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-27vkt" podStartSLOduration=124.785534771 podStartE2EDuration="2m4.785534771s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:35.783759568 +0000 UTC m=+145.418859001" watchObservedRunningTime="2026-02-17 13:47:35.785534771 +0000 UTC m=+145.420634204" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.806352 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-17 13:42:34 +0000 UTC, rotation deadline is 2026-12-27 04:59:42.880935139 +0000 UTC Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.806412 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7503h12m7.074525697s for next certificate rotation Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.829591 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-fps2n" podStartSLOduration=125.82957438 podStartE2EDuration="2m5.82957438s" podCreationTimestamp="2026-02-17 13:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:35.828677724 +0000 UTC m=+145.463777157" 
watchObservedRunningTime="2026-02-17 13:47:35.82957438 +0000 UTC m=+145.464673813" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.867762 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-k497l" podStartSLOduration=124.867743895 podStartE2EDuration="2m4.867743895s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:35.86621396 +0000 UTC m=+145.501313393" watchObservedRunningTime="2026-02-17 13:47:35.867743895 +0000 UTC m=+145.502843328" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.886016 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:35 crc kubenswrapper[4833]: E0217 13:47:35.890713 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:36.390696607 +0000 UTC m=+146.025796040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.898169 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsld" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.900101 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw" podStartSLOduration=124.900089487 podStartE2EDuration="2m4.900089487s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:35.898616693 +0000 UTC m=+145.533716126" watchObservedRunningTime="2026-02-17 13:47:35.900089487 +0000 UTC m=+145.535188930" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.991578 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:35 crc kubenswrapper[4833]: E0217 13:47:35.991912 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 13:47:36.491884175 +0000 UTC m=+146.126983598 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.991924 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sgpfh" podStartSLOduration=124.991907396 podStartE2EDuration="2m4.991907396s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:35.955657808 +0000 UTC m=+145.590757271" watchObservedRunningTime="2026-02-17 13:47:35.991907396 +0000 UTC m=+145.627006829" Feb 17 13:47:35 crc kubenswrapper[4833]: I0217 13:47:35.992161 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:35 crc kubenswrapper[4833]: E0217 13:47:35.992540 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 13:47:36.492529735 +0000 UTC m=+146.127629168 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.038447 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pc9vj" podStartSLOduration=125.038426809 podStartE2EDuration="2m5.038426809s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:36.037914414 +0000 UTC m=+145.673013847" watchObservedRunningTime="2026-02-17 13:47:36.038426809 +0000 UTC m=+145.673526242" Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.095531 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:36 crc kubenswrapper[4833]: E0217 13:47:36.095733 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:36.595712062 +0000 UTC m=+146.230811495 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.095897 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:36 crc kubenswrapper[4833]: E0217 13:47:36.096154 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:36.596147435 +0000 UTC m=+146.231246868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.115993 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6pb" podStartSLOduration=126.115976254 podStartE2EDuration="2m6.115976254s" podCreationTimestamp="2026-02-17 13:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:36.079117659 +0000 UTC m=+145.714217082" watchObservedRunningTime="2026-02-17 13:47:36.115976254 +0000 UTC m=+145.751075687" Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.196477 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:36 crc kubenswrapper[4833]: E0217 13:47:36.196924 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:36.6969053 +0000 UTC m=+146.332004733 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.201250 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dmmfm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:47:36 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Feb 17 13:47:36 crc kubenswrapper[4833]: [+]process-running ok Feb 17 13:47:36 crc kubenswrapper[4833]: healthz check failed Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.201304 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmmfm" podUID="9d9814af-c66a-49fd-a3ca-814e8b0caf48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.298910 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:36 crc kubenswrapper[4833]: E0217 13:47:36.299554 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 13:47:36.799534201 +0000 UTC m=+146.434633704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.400127 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:36 crc kubenswrapper[4833]: E0217 13:47:36.400321 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:36.900293336 +0000 UTC m=+146.535392769 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.400409 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:36 crc kubenswrapper[4833]: E0217 13:47:36.401017 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:36.901004957 +0000 UTC m=+146.536104390 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.500920 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:36 crc kubenswrapper[4833]: E0217 13:47:36.501122 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:37.001098043 +0000 UTC m=+146.636197476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.501306 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:36 crc kubenswrapper[4833]: E0217 13:47:36.501599 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:37.001588117 +0000 UTC m=+146.636687550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.601897 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:36 crc kubenswrapper[4833]: E0217 13:47:36.602102 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:37.102073235 +0000 UTC m=+146.737172718 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.602148 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:36 crc kubenswrapper[4833]: E0217 13:47:36.602417 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:37.102403284 +0000 UTC m=+146.737502717 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.702991 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:36 crc kubenswrapper[4833]: E0217 13:47:36.703610 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:37.203108298 +0000 UTC m=+146.838207731 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.703981 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:36 crc kubenswrapper[4833]: E0217 13:47:36.704373 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:37.204356125 +0000 UTC m=+146.839455558 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.804760 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:36 crc kubenswrapper[4833]: E0217 13:47:36.804962 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:37.304937135 +0000 UTC m=+146.940036568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.805083 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:36 crc kubenswrapper[4833]: E0217 13:47:36.805371 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:37.305359558 +0000 UTC m=+146.940458991 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.832672 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" event={"ID":"7a0991bf-b28c-471a-8c28-24b461784fdd","Type":"ContainerStarted","Data":"922fe5e97d3e4e840fa0e7200e3e15bc60529ef0009ea2266008c78c410706a8"} Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.833624 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-k497l" event={"ID":"61861be4-c520-4f21-9a1b-4ad3bb11ba3c","Type":"ContainerStarted","Data":"2e4db3666547799f2cd811b24c1c7d6f23228bd075f79c36ee22141bbbd01582"} Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.835621 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4" event={"ID":"c251713f-cec1-4ae0-a70b-553ab3b74a5b","Type":"ContainerStarted","Data":"05e0403691dde962481d594acc01f3b981d3ac2cbd3d3b4837389d81bdcf4d58"} Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.835747 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4" Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.837236 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2mlww" event={"ID":"f5b88dcb-84d4-4311-8c01-860d17b444eb","Type":"ContainerStarted","Data":"2b023a48b960e391c33d2188f66be361b3a6c3f0e3669653078fe24d1162857f"} 
Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.838432 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk" event={"ID":"00b43a88-457b-4c5a-ab44-2af8e47b2c2d","Type":"ContainerStarted","Data":"0e226ba7ce07963d5026529e4c3731413e927901e080a180e45f9a5c02732453"} Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.839767 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-57krf" event={"ID":"80d59877-4f3e-4680-9348-3109670fd514","Type":"ContainerStarted","Data":"e85fa4e4362b4a059f0ed7dc82201ebbf1489cc9a7a9c91657ab5f7095a01675"} Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.841808 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" event={"ID":"d5560b0e-f2b1-469b-b989-f2abce8e9b8b","Type":"ContainerStarted","Data":"38e771dfb527d6d2a3546e4872cfed76ab8c98c37123bd37732128e7c4270ebc"} Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.841864 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" event={"ID":"d5560b0e-f2b1-469b-b989-f2abce8e9b8b","Type":"ContainerStarted","Data":"1e837ca47e21f58645b45b56eab28263b561d8770461791ec89206db6567738f"} Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.843810 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5d2h9" event={"ID":"6ff7bf32-ea2f-46af-89c7-7467a9e48f06","Type":"ContainerStarted","Data":"410cff3fdd79b952f06e5ad93c5cc936d07cfc7fb8312e4c8ffb01de67611ee0"} Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.846125 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2" 
event={"ID":"9bd21e79-5975-4f5f-976e-94a05e2df000","Type":"ContainerStarted","Data":"7d928c6f2eeb26843ad54ce1bae2a1c72e444732126ac8dcff097dc56f75627f"} Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.846690 4833 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-cvhc2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" start-of-body= Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.846742 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2" podUID="9bd21e79-5975-4f5f-976e-94a05e2df000" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.850145 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xj6dt" event={"ID":"bf443d79-e768-4da2-b385-e6b8072cb88e","Type":"ContainerStarted","Data":"c97af8bf57bff6586e684ba9a517715b27ab7600818b97f7f09d23e05c4e66c8"} Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.851818 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg" event={"ID":"f1cb67f7-508e-44b5-9fa7-bb8f811812f8","Type":"ContainerStarted","Data":"28ff3c6ef6553807a14085677702b8dc2963bfcccb17a2050def8c5a35fa9d6c"} Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.853260 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-stnj2" event={"ID":"c1e5bc60-f7d5-436d-9298-3099adb6bc0a","Type":"ContainerStarted","Data":"8c044ecef7b87cce06d80bf91e4fd60f8a055ee31670d4820b8a7be8c947eaab"} Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 
13:47:36.855874 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9wgts" event={"ID":"5adc4cc4-935c-4a7b-a68c-442269bc7c5d","Type":"ContainerStarted","Data":"da4eed7617840f2d818ca25cc46ccfd8e3c9852bb8ead388a0f585ab6bbf4de8"} Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.855898 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9wgts" event={"ID":"5adc4cc4-935c-4a7b-a68c-442269bc7c5d","Type":"ContainerStarted","Data":"e39ef4c010e5d7611e3e123c4e9f07f4770013c0aeb2133a50613758ec0c556f"} Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.855921 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-9wgts" Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.857015 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvzml" event={"ID":"502603f9-5374-4fa7-8398-1f9d931e370f","Type":"ContainerStarted","Data":"9b6dc45a8ecb3d2aad7e2857afdc31266b8e19196f8c9b29d014b12849bd1b6c"} Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.858540 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-27vkt" event={"ID":"ec012f89-9c82-4c49-9a7e-892979946444","Type":"ContainerStarted","Data":"b2799a57463007074b9cd45e85675556b393219db37de125a55fd1e21389851f"} Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.859886 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm" event={"ID":"a39a9177-9838-434f-a2e0-c8359ff146fe","Type":"ContainerStarted","Data":"0c2ecd6469f780c39aa056f478860c25ca11ce0b43315c08cc2709be069a2396"} Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.860466 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm" Feb 17 
13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.861229 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm" podStartSLOduration=125.861219378 podStartE2EDuration="2m5.861219378s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:36.117607513 +0000 UTC m=+145.752706946" watchObservedRunningTime="2026-02-17 13:47:36.861219378 +0000 UTC m=+146.496318811" Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.863828 4833 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6nkfm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.863887 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm" podUID="a39a9177-9838-434f-a2e0-c8359ff146fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.869397 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl" event={"ID":"6d2e32a4-51dc-4406-834c-58392b4727b2","Type":"ContainerStarted","Data":"314e60f234f986da5e10c08d975b6cb887a0aa336ef14ebe5091e2ccce8dcc11"} Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.869427 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl" 
event={"ID":"6d2e32a4-51dc-4406-834c-58392b4727b2","Type":"ContainerStarted","Data":"1e528cabd26c85a697a8142681a4a908dd4f59fdffbc0abc12b7c2cd0046ab11"} Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.871533 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-fps2n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.871626 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fps2n" podUID="c91cc697-a95c-4c07-8750-878937f50446" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.875859 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r5vw" Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.905731 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:36 crc kubenswrapper[4833]: E0217 13:47:36.905931 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:37.405896496 +0000 UTC m=+147.040995919 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.907709 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.909584 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-knm6z" podStartSLOduration=125.909570686 podStartE2EDuration="2m5.909570686s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:36.863604589 +0000 UTC m=+146.498704022" watchObservedRunningTime="2026-02-17 13:47:36.909570686 +0000 UTC m=+146.544670119" Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.924613 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4" podStartSLOduration=126.914587285 podStartE2EDuration="2m6.914587285s" podCreationTimestamp="2026-02-17 13:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:36.908485823 +0000 UTC 
m=+146.543585256" watchObservedRunningTime="2026-02-17 13:47:36.914587285 +0000 UTC m=+146.549686718" Feb 17 13:47:36 crc kubenswrapper[4833]: E0217 13:47:36.932092 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:37.432003993 +0000 UTC m=+147.067103426 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:36 crc kubenswrapper[4833]: I0217 13:47:36.998587 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-9wgts" podStartSLOduration=7.998571681 podStartE2EDuration="7.998571681s" podCreationTimestamp="2026-02-17 13:47:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:36.996785978 +0000 UTC m=+146.631885411" watchObservedRunningTime="2026-02-17 13:47:36.998571681 +0000 UTC m=+146.633671114" Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.010871 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:37 crc kubenswrapper[4833]: E0217 13:47:37.011268 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:37.511251858 +0000 UTC m=+147.146351291 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.014339 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.014639 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.018157 4833 patch_prober.go:28] interesting pod/apiserver-76f77b778f-b8pbt container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.018212 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" podUID="d5560b0e-f2b1-469b-b989-f2abce8e9b8b" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.030355 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xj6dt" podStartSLOduration=126.030338946 podStartE2EDuration="2m6.030338946s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:37.028054868 +0000 UTC m=+146.663154301" watchObservedRunningTime="2026-02-17 13:47:37.030338946 +0000 UTC m=+146.665438389" Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.073153 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvzml" podStartSLOduration=126.073134998 podStartE2EDuration="2m6.073134998s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:37.062537183 +0000 UTC m=+146.697636606" watchObservedRunningTime="2026-02-17 13:47:37.073134998 +0000 UTC m=+146.708234431" Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.113629 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:37 crc kubenswrapper[4833]: E0217 13:47:37.113964 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:37.613953191 +0000 UTC m=+147.249052624 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.152504 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" podStartSLOduration=127.152488537 podStartE2EDuration="2m7.152488537s" podCreationTimestamp="2026-02-17 13:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:37.11694702 +0000 UTC m=+146.752046443" watchObservedRunningTime="2026-02-17 13:47:37.152488537 +0000 UTC m=+146.787587960" Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.173108 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg" podStartSLOduration=126.173091779 podStartE2EDuration="2m6.173091779s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:37.156808725 +0000 UTC m=+146.791908158" watchObservedRunningTime="2026-02-17 13:47:37.173091779 +0000 UTC m=+146.808191222" Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.197204 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dmmfm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:47:37 crc kubenswrapper[4833]: [-]has-synced failed: 
reason withheld Feb 17 13:47:37 crc kubenswrapper[4833]: [+]process-running ok Feb 17 13:47:37 crc kubenswrapper[4833]: healthz check failed Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.197497 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmmfm" podUID="9d9814af-c66a-49fd-a3ca-814e8b0caf48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.209548 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-2mlww" podStartSLOduration=126.209531103 podStartE2EDuration="2m6.209531103s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:37.174202452 +0000 UTC m=+146.809301875" watchObservedRunningTime="2026-02-17 13:47:37.209531103 +0000 UTC m=+146.844630536" Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.215813 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg" Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.215983 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg" Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.216515 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:37 crc kubenswrapper[4833]: E0217 13:47:37.216931 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:37.716915392 +0000 UTC m=+147.352014825 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.233835 4833 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-xdxhg container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.233876 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg" podUID="f1cb67f7-508e-44b5-9fa7-bb8f811812f8" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.253178 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-stnj2" podStartSLOduration=126.25316132 podStartE2EDuration="2m6.25316132s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:37.209029678 +0000 UTC m=+146.844129111" watchObservedRunningTime="2026-02-17 
13:47:37.25316132 +0000 UTC m=+146.888260753" Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.285030 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-chdnk" podStartSLOduration=126.285010136 podStartE2EDuration="2m6.285010136s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:37.253304304 +0000 UTC m=+146.888403737" watchObservedRunningTime="2026-02-17 13:47:37.285010136 +0000 UTC m=+146.920109569" Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.287247 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87pnl" podStartSLOduration=126.287232412 podStartE2EDuration="2m6.287232412s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:37.284588704 +0000 UTC m=+146.919688127" watchObservedRunningTime="2026-02-17 13:47:37.287232412 +0000 UTC m=+146.922331855" Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.317713 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:37 crc kubenswrapper[4833]: E0217 13:47:37.318173 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 13:47:37.818160792 +0000 UTC m=+147.453260225 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.418747 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:37 crc kubenswrapper[4833]: E0217 13:47:37.419366 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:37.919328089 +0000 UTC m=+147.554427522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.520122 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:37 crc kubenswrapper[4833]: E0217 13:47:37.520490 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:38.020473096 +0000 UTC m=+147.655572529 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.621697 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:37 crc kubenswrapper[4833]: E0217 13:47:37.621820 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:38.121795758 +0000 UTC m=+147.756895181 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.621887 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:37 crc kubenswrapper[4833]: E0217 13:47:37.622255 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:38.122246922 +0000 UTC m=+147.757346355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.722993 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:37 crc kubenswrapper[4833]: E0217 13:47:37.723204 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:38.223188492 +0000 UTC m=+147.858287925 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.723400 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:37 crc kubenswrapper[4833]: E0217 13:47:37.723663 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:38.223656316 +0000 UTC m=+147.858755749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.825398 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:37 crc kubenswrapper[4833]: E0217 13:47:37.825538 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:38.325518024 +0000 UTC m=+147.960617457 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.825594 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:37 crc kubenswrapper[4833]: E0217 13:47:37.825974 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:38.325956057 +0000 UTC m=+147.961055490 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.871322 4833 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-h2rpb container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.22:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.871703 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" podUID="234db162-a495-45d1-8af9-7e2deaa2763c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.22:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.881839 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-57krf" event={"ID":"80d59877-4f3e-4680-9348-3109670fd514","Type":"ContainerStarted","Data":"dd4360df265ce1c4e8d9f41b5298f062f205e1830b30dde030d784714ceea4e9"} Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.882588 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-fps2n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 
13:47:37.882639 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fps2n" podUID="c91cc697-a95c-4c07-8750-878937f50446" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.882847 4833 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6nkfm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.882885 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm" podUID="a39a9177-9838-434f-a2e0-c8359ff146fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Feb 17 13:47:37 crc kubenswrapper[4833]: I0217 13:47:37.927002 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:37 crc kubenswrapper[4833]: E0217 13:47:37.928169 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:38.428142505 +0000 UTC m=+148.063241938 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.028825 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.028874 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.028897 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.028918 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.028946 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:38 crc kubenswrapper[4833]: E0217 13:47:38.029219 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:38.529209439 +0000 UTC m=+148.164308872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.034643 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.035115 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.039419 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.040887 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.130233 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:38 crc kubenswrapper[4833]: E0217 13:47:38.130462 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:38.630423258 +0000 UTC m=+148.265522701 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.130644 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:38 crc kubenswrapper[4833]: E0217 13:47:38.130980 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:38.630965994 +0000 UTC m=+148.266065427 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.165435 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.189926 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dmmfm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:47:38 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Feb 17 13:47:38 crc kubenswrapper[4833]: [+]process-running ok Feb 17 13:47:38 crc kubenswrapper[4833]: healthz check failed Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.189982 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmmfm" podUID="9d9814af-c66a-49fd-a3ca-814e8b0caf48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.231218 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:47:38 crc kubenswrapper[4833]: E0217 13:47:38.231427 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:38.73141287 +0000 UTC m=+148.366512303 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.256523 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5q7l2"] Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.257376 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5q7l2" Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.259773 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.278544 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.283418 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.299525 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5q7l2"] Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.332187 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.332256 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cf157f5-3e18-491b-a285-a25a7e71b2ff-utilities\") pod \"community-operators-5q7l2\" (UID: \"3cf157f5-3e18-491b-a285-a25a7e71b2ff\") " pod="openshift-marketplace/community-operators-5q7l2" Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.332285 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vx2q\" (UniqueName: \"kubernetes.io/projected/3cf157f5-3e18-491b-a285-a25a7e71b2ff-kube-api-access-7vx2q\") pod \"community-operators-5q7l2\" (UID: \"3cf157f5-3e18-491b-a285-a25a7e71b2ff\") " pod="openshift-marketplace/community-operators-5q7l2" Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.332319 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cf157f5-3e18-491b-a285-a25a7e71b2ff-catalog-content\") pod \"community-operators-5q7l2\" (UID: \"3cf157f5-3e18-491b-a285-a25a7e71b2ff\") " pod="openshift-marketplace/community-operators-5q7l2" Feb 17 13:47:38 crc 
kubenswrapper[4833]: E0217 13:47:38.332662 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:38.83264826 +0000 UTC m=+148.467747693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.433122 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.433357 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cf157f5-3e18-491b-a285-a25a7e71b2ff-utilities\") pod \"community-operators-5q7l2\" (UID: \"3cf157f5-3e18-491b-a285-a25a7e71b2ff\") " pod="openshift-marketplace/community-operators-5q7l2"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.433385 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vx2q\" (UniqueName: \"kubernetes.io/projected/3cf157f5-3e18-491b-a285-a25a7e71b2ff-kube-api-access-7vx2q\") pod \"community-operators-5q7l2\" (UID: \"3cf157f5-3e18-491b-a285-a25a7e71b2ff\") " pod="openshift-marketplace/community-operators-5q7l2"
Feb 17 13:47:38 crc kubenswrapper[4833]: E0217 13:47:38.433425 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:38.933393865 +0000 UTC m=+148.568493298 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.433475 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cf157f5-3e18-491b-a285-a25a7e71b2ff-catalog-content\") pod \"community-operators-5q7l2\" (UID: \"3cf157f5-3e18-491b-a285-a25a7e71b2ff\") " pod="openshift-marketplace/community-operators-5q7l2"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.434343 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cf157f5-3e18-491b-a285-a25a7e71b2ff-catalog-content\") pod \"community-operators-5q7l2\" (UID: \"3cf157f5-3e18-491b-a285-a25a7e71b2ff\") " pod="openshift-marketplace/community-operators-5q7l2"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.434416 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cf157f5-3e18-491b-a285-a25a7e71b2ff-utilities\") pod \"community-operators-5q7l2\" (UID: \"3cf157f5-3e18-491b-a285-a25a7e71b2ff\") " pod="openshift-marketplace/community-operators-5q7l2"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.456685 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vnkbw"]
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.458123 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vnkbw"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.461560 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.467505 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vx2q\" (UniqueName: \"kubernetes.io/projected/3cf157f5-3e18-491b-a285-a25a7e71b2ff-kube-api-access-7vx2q\") pod \"community-operators-5q7l2\" (UID: \"3cf157f5-3e18-491b-a285-a25a7e71b2ff\") " pod="openshift-marketplace/community-operators-5q7l2"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.485819 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vnkbw"]
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.535705 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx"
Feb 17 13:47:38 crc kubenswrapper[4833]: E0217 13:47:38.536274 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:39.036261392 +0000 UTC m=+148.671360815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.596476 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5q7l2"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.643759 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.643848 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7866e11a-9385-4003-9406-d4012097cbb3-utilities\") pod \"certified-operators-vnkbw\" (UID: \"7866e11a-9385-4003-9406-d4012097cbb3\") " pod="openshift-marketplace/certified-operators-vnkbw"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.643885 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh9d5\" (UniqueName: \"kubernetes.io/projected/7866e11a-9385-4003-9406-d4012097cbb3-kube-api-access-nh9d5\") pod \"certified-operators-vnkbw\" (UID: \"7866e11a-9385-4003-9406-d4012097cbb3\") " pod="openshift-marketplace/certified-operators-vnkbw"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.643941 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7866e11a-9385-4003-9406-d4012097cbb3-catalog-content\") pod \"certified-operators-vnkbw\" (UID: \"7866e11a-9385-4003-9406-d4012097cbb3\") " pod="openshift-marketplace/certified-operators-vnkbw"
Feb 17 13:47:38 crc kubenswrapper[4833]: E0217 13:47:38.644025 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:39.144010905 +0000 UTC m=+148.779110338 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.667908 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g6vpd"]
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.668820 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g6vpd"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.675811 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g6vpd"]
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.698317 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cvhc2"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.751996 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.752325 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh9d5\" (UniqueName: \"kubernetes.io/projected/7866e11a-9385-4003-9406-d4012097cbb3-kube-api-access-nh9d5\") pod \"certified-operators-vnkbw\" (UID: \"7866e11a-9385-4003-9406-d4012097cbb3\") " pod="openshift-marketplace/certified-operators-vnkbw"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.752353 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73094c1e-9f36-472d-9275-12562b4cd250-catalog-content\") pod \"community-operators-g6vpd\" (UID: \"73094c1e-9f36-472d-9275-12562b4cd250\") " pod="openshift-marketplace/community-operators-g6vpd"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.752391 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmqfk\" (UniqueName: \"kubernetes.io/projected/73094c1e-9f36-472d-9275-12562b4cd250-kube-api-access-wmqfk\") pod \"community-operators-g6vpd\" (UID: \"73094c1e-9f36-472d-9275-12562b4cd250\") " pod="openshift-marketplace/community-operators-g6vpd"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.752426 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7866e11a-9385-4003-9406-d4012097cbb3-catalog-content\") pod \"certified-operators-vnkbw\" (UID: \"7866e11a-9385-4003-9406-d4012097cbb3\") " pod="openshift-marketplace/certified-operators-vnkbw"
Feb 17 13:47:38 crc kubenswrapper[4833]: E0217 13:47:38.752713 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:39.252696436 +0000 UTC m=+148.887795869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.752761 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7866e11a-9385-4003-9406-d4012097cbb3-utilities\") pod \"certified-operators-vnkbw\" (UID: \"7866e11a-9385-4003-9406-d4012097cbb3\") " pod="openshift-marketplace/certified-operators-vnkbw"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.752798 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73094c1e-9f36-472d-9275-12562b4cd250-utilities\") pod \"community-operators-g6vpd\" (UID: \"73094c1e-9f36-472d-9275-12562b4cd250\") " pod="openshift-marketplace/community-operators-g6vpd"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.753513 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7866e11a-9385-4003-9406-d4012097cbb3-catalog-content\") pod \"certified-operators-vnkbw\" (UID: \"7866e11a-9385-4003-9406-d4012097cbb3\") " pod="openshift-marketplace/certified-operators-vnkbw"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.753558 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7866e11a-9385-4003-9406-d4012097cbb3-utilities\") pod \"certified-operators-vnkbw\" (UID: \"7866e11a-9385-4003-9406-d4012097cbb3\") " pod="openshift-marketplace/certified-operators-vnkbw"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.778784 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh9d5\" (UniqueName: \"kubernetes.io/projected/7866e11a-9385-4003-9406-d4012097cbb3-kube-api-access-nh9d5\") pod \"certified-operators-vnkbw\" (UID: \"7866e11a-9385-4003-9406-d4012097cbb3\") " pod="openshift-marketplace/certified-operators-vnkbw"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.835319 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vnkbw"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.861578 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wrrv5"]
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.861979 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.862187 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73094c1e-9f36-472d-9275-12562b4cd250-utilities\") pod \"community-operators-g6vpd\" (UID: \"73094c1e-9f36-472d-9275-12562b4cd250\") " pod="openshift-marketplace/community-operators-g6vpd"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.862230 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73094c1e-9f36-472d-9275-12562b4cd250-catalog-content\") pod \"community-operators-g6vpd\" (UID: \"73094c1e-9f36-472d-9275-12562b4cd250\") " pod="openshift-marketplace/community-operators-g6vpd"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.862274 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmqfk\" (UniqueName: \"kubernetes.io/projected/73094c1e-9f36-472d-9275-12562b4cd250-kube-api-access-wmqfk\") pod \"community-operators-g6vpd\" (UID: \"73094c1e-9f36-472d-9275-12562b4cd250\") " pod="openshift-marketplace/community-operators-g6vpd"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.862729 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73094c1e-9f36-472d-9275-12562b4cd250-utilities\") pod \"community-operators-g6vpd\" (UID: \"73094c1e-9f36-472d-9275-12562b4cd250\") " pod="openshift-marketplace/community-operators-g6vpd"
Feb 17 13:47:38 crc kubenswrapper[4833]: E0217 13:47:38.862819 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:39.362801409 +0000 UTC m=+148.997900842 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.862843 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73094c1e-9f36-472d-9275-12562b4cd250-catalog-content\") pod \"community-operators-g6vpd\" (UID: \"73094c1e-9f36-472d-9275-12562b4cd250\") " pod="openshift-marketplace/community-operators-g6vpd"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.871812 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wrrv5"
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.882284 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wrrv5"]
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.897818 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmqfk\" (UniqueName: \"kubernetes.io/projected/73094c1e-9f36-472d-9275-12562b4cd250-kube-api-access-wmqfk\") pod \"community-operators-g6vpd\" (UID: \"73094c1e-9f36-472d-9275-12562b4cd250\") " pod="openshift-marketplace/community-operators-g6vpd"
Feb 17 13:47:38 crc kubenswrapper[4833]: W0217 13:47:38.948227 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-553b3fa4e23e79965f7dbb6254353440444fd880f876c71f37dc411980d64ceb WatchSource:0}: Error finding container 553b3fa4e23e79965f7dbb6254353440444fd880f876c71f37dc411980d64ceb: Status 404 returned error can't find the container with id 553b3fa4e23e79965f7dbb6254353440444fd880f876c71f37dc411980d64ceb
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.957226 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-57krf" event={"ID":"80d59877-4f3e-4680-9348-3109670fd514","Type":"ContainerStarted","Data":"0298d8cc8f5c29d6f9bf6d211722dd24a19313eb1d3cbbe03f1a5be31c01094f"}
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.957261 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-57krf" event={"ID":"80d59877-4f3e-4680-9348-3109670fd514","Type":"ContainerStarted","Data":"40d167f7cc305d3df9beefccd5a23ec49451205f42272bacb69c66df4737ac19"}
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.964607 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx"
Feb 17 13:47:38 crc kubenswrapper[4833]: E0217 13:47:38.966215 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:39.466200023 +0000 UTC m=+149.101299456 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:38 crc kubenswrapper[4833]: I0217 13:47:38.996151 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-57krf" podStartSLOduration=9.996136192 podStartE2EDuration="9.996136192s" podCreationTimestamp="2026-02-17 13:47:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:38.992440303 +0000 UTC m=+148.627539736" watchObservedRunningTime="2026-02-17 13:47:38.996136192 +0000 UTC m=+148.631235625"
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.001770 4833 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.017275 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g6vpd"
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.065494 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 13:47:39 crc kubenswrapper[4833]: E0217 13:47:39.065661 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:39.565645959 +0000 UTC m=+149.200745392 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.065815 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96680c84-52a5-4ddc-a676-7a5c71c9f3f6-utilities\") pod \"certified-operators-wrrv5\" (UID: \"96680c84-52a5-4ddc-a676-7a5c71c9f3f6\") " pod="openshift-marketplace/certified-operators-wrrv5"
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.065908 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96680c84-52a5-4ddc-a676-7a5c71c9f3f6-catalog-content\") pod \"certified-operators-wrrv5\" (UID: \"96680c84-52a5-4ddc-a676-7a5c71c9f3f6\") " pod="openshift-marketplace/certified-operators-wrrv5"
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.065982 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69d5t\" (UniqueName: \"kubernetes.io/projected/96680c84-52a5-4ddc-a676-7a5c71c9f3f6-kube-api-access-69d5t\") pod \"certified-operators-wrrv5\" (UID: \"96680c84-52a5-4ddc-a676-7a5c71c9f3f6\") " pod="openshift-marketplace/certified-operators-wrrv5"
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.066062 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx"
Feb 17 13:47:39 crc kubenswrapper[4833]: E0217 13:47:39.066332 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:47:39.566319699 +0000 UTC m=+149.201419132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2ddlx" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.148478 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9kx4"
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.171872 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.172067 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96680c84-52a5-4ddc-a676-7a5c71c9f3f6-utilities\") pod \"certified-operators-wrrv5\" (UID: \"96680c84-52a5-4ddc-a676-7a5c71c9f3f6\") " pod="openshift-marketplace/certified-operators-wrrv5"
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.172123 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96680c84-52a5-4ddc-a676-7a5c71c9f3f6-catalog-content\") pod \"certified-operators-wrrv5\" (UID: \"96680c84-52a5-4ddc-a676-7a5c71c9f3f6\") " pod="openshift-marketplace/certified-operators-wrrv5"
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.172161 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69d5t\" (UniqueName: \"kubernetes.io/projected/96680c84-52a5-4ddc-a676-7a5c71c9f3f6-kube-api-access-69d5t\") pod \"certified-operators-wrrv5\" (UID: \"96680c84-52a5-4ddc-a676-7a5c71c9f3f6\") " pod="openshift-marketplace/certified-operators-wrrv5"
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.172769 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96680c84-52a5-4ddc-a676-7a5c71c9f3f6-utilities\") pod \"certified-operators-wrrv5\" (UID: \"96680c84-52a5-4ddc-a676-7a5c71c9f3f6\") " pod="openshift-marketplace/certified-operators-wrrv5"
Feb 17 13:47:39 crc kubenswrapper[4833]: E0217 13:47:39.172873 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:47:39.672852576 +0000 UTC m=+149.307952059 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.181241 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96680c84-52a5-4ddc-a676-7a5c71c9f3f6-catalog-content\") pod \"certified-operators-wrrv5\" (UID: \"96680c84-52a5-4ddc-a676-7a5c71c9f3f6\") " pod="openshift-marketplace/certified-operators-wrrv5"
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.203161 4833 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-17T13:47:39.001798471Z","Handler":null,"Name":""}
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.259942 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dmmfm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 13:47:39 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld
Feb 17 13:47:39 crc kubenswrapper[4833]: [+]process-running ok
Feb 17 13:47:39 crc kubenswrapper[4833]: healthz check failed
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.260003 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmmfm" podUID="9d9814af-c66a-49fd-a3ca-814e8b0caf48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.261983 4833 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.262106 4833 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.271995 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69d5t\" (UniqueName: \"kubernetes.io/projected/96680c84-52a5-4ddc-a676-7a5c71c9f3f6-kube-api-access-69d5t\") pod \"certified-operators-wrrv5\" (UID: \"96680c84-52a5-4ddc-a676-7a5c71c9f3f6\") " pod="openshift-marketplace/certified-operators-wrrv5"
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.275269 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx"
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.289163 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.289199 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx"
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.290141 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wrrv5"
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.436452 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2ddlx\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx"
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.449210 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5q7l2"]
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.481726 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.494249 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx"
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.564814 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.689533 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g6vpd"]
Feb 17 13:47:39 crc kubenswrapper[4833]: W0217 13:47:39.699640 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73094c1e_9f36_472d_9275_12562b4cd250.slice/crio-10f710eca45f0e63ea5e364b5862eac8863409952474ae16202912682cce393e WatchSource:0}: Error finding container 10f710eca45f0e63ea5e364b5862eac8863409952474ae16202912682cce393e: Status 404 returned error can't find the container with id 10f710eca45f0e63ea5e364b5862eac8863409952474ae16202912682cce393e
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.717604 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vnkbw"]
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.809887 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wrrv5"]
Feb 17 13:47:39 crc kubenswrapper[4833]: W0217 13:47:39.817693 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7866e11a_9385_4003_9406_d4012097cbb3.slice/crio-c72ce71bf36915bd0a90bb51ccf819702c22194f125545526611a8671c23093e WatchSource:0}: Error finding container c72ce71bf36915bd0a90bb51ccf819702c22194f125545526611a8671c23093e: Status 404 returned error can't find the container with id c72ce71bf36915bd0a90bb51ccf819702c22194f125545526611a8671c23093e
Feb 17 13:47:39 crc kubenswrapper[4833]: W0217 13:47:39.825009 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96680c84_52a5_4ddc_a676_7a5c71c9f3f6.slice/crio-5e8399f605bb6a9d9c9b851450ff36be3a87a9e9b89d8058637c48104c84dc95 WatchSource:0}: Error finding container 5e8399f605bb6a9d9c9b851450ff36be3a87a9e9b89d8058637c48104c84dc95: Status 404 returned error can't find the container with id 5e8399f605bb6a9d9c9b851450ff36be3a87a9e9b89d8058637c48104c84dc95
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.854532 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2ddlx"]
Feb 17 13:47:39 crc kubenswrapper[4833]: W0217 13:47:39.864358 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16569a9d_7677_455d_87c6_7b2fb504b731.slice/crio-0a902cc5b17aa83f614e84c5fce6a5150af843f83545ba847b41e75818d38ffe WatchSource:0}: Error finding container 0a902cc5b17aa83f614e84c5fce6a5150af843f83545ba847b41e75818d38ffe: Status 404 returned error can't find the container with id 0a902cc5b17aa83f614e84c5fce6a5150af843f83545ba847b41e75818d38ffe
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.969029 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"eee695663bb5936c4b1ea07a576154ff59455bf0ba672b8b026952bc8a8f73c3"}
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.969097 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f4fd98d9dcda3f7837f4505ead6d34b9b429489bde6bb9d41ca79b653d3528cd"}
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.974204 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"dae2d57dbc9f56a0347c778d8edfe64f17e8106f1cfca02bc3856f301b7bbda6"}
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.974309 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"553b3fa4e23e79965f7dbb6254353440444fd880f876c71f37dc411980d64ceb"}
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.975748 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" event={"ID":"16569a9d-7677-455d-87c6-7b2fb504b731","Type":"ContainerStarted","Data":"0a902cc5b17aa83f614e84c5fce6a5150af843f83545ba847b41e75818d38ffe"}
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.977706 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6vpd" event={"ID":"73094c1e-9f36-472d-9275-12562b4cd250","Type":"ContainerStarted","Data":"10f710eca45f0e63ea5e364b5862eac8863409952474ae16202912682cce393e"}
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.978866 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrrv5" event={"ID":"96680c84-52a5-4ddc-a676-7a5c71c9f3f6","Type":"ContainerStarted","Data":"5e8399f605bb6a9d9c9b851450ff36be3a87a9e9b89d8058637c48104c84dc95"}
Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.979564 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnkbw"
event={"ID":"7866e11a-9385-4003-9406-d4012097cbb3","Type":"ContainerStarted","Data":"c72ce71bf36915bd0a90bb51ccf819702c22194f125545526611a8671c23093e"} Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.981108 4833 generic.go:334] "Generic (PLEG): container finished" podID="3cf157f5-3e18-491b-a285-a25a7e71b2ff" containerID="9c13c4d8d407f34642fc01677779109f6f22525cbd5eeb1deecbc4a7c537ba46" exitCode=0 Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.981140 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5q7l2" event={"ID":"3cf157f5-3e18-491b-a285-a25a7e71b2ff","Type":"ContainerDied","Data":"9c13c4d8d407f34642fc01677779109f6f22525cbd5eeb1deecbc4a7c537ba46"} Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.981166 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5q7l2" event={"ID":"3cf157f5-3e18-491b-a285-a25a7e71b2ff","Type":"ContainerStarted","Data":"ae212206a11cce48998e62a98cecc1a46f9b4521c922547883690d93d62ec5e9"} Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.982504 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.985614 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6539c5f3251d5de4afe1108a21d2b3440bb9bfb8ccff3c63eade1d2c4821f42d"} Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.985644 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0d440a9d58d998fc7d93c6d5d45f23248eb7c51ced488f22154ad0623c333ec4"} Feb 17 13:47:39 crc kubenswrapper[4833]: I0217 13:47:39.985993 4833 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.189854 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dmmfm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:47:40 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Feb 17 13:47:40 crc kubenswrapper[4833]: [+]process-running ok Feb 17 13:47:40 crc kubenswrapper[4833]: healthz check failed Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.189903 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmmfm" podUID="9d9814af-c66a-49fd-a3ca-814e8b0caf48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.249874 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s22hg"] Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.251448 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s22hg" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.254468 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.259214 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s22hg"] Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.295799 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/811bfe90-4b33-4bfb-969f-63d5dbde1b94-utilities\") pod \"redhat-marketplace-s22hg\" (UID: \"811bfe90-4b33-4bfb-969f-63d5dbde1b94\") " pod="openshift-marketplace/redhat-marketplace-s22hg" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.295849 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/811bfe90-4b33-4bfb-969f-63d5dbde1b94-catalog-content\") pod \"redhat-marketplace-s22hg\" (UID: \"811bfe90-4b33-4bfb-969f-63d5dbde1b94\") " pod="openshift-marketplace/redhat-marketplace-s22hg" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.295901 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkhc8\" (UniqueName: \"kubernetes.io/projected/811bfe90-4b33-4bfb-969f-63d5dbde1b94-kube-api-access-vkhc8\") pod \"redhat-marketplace-s22hg\" (UID: \"811bfe90-4b33-4bfb-969f-63d5dbde1b94\") " pod="openshift-marketplace/redhat-marketplace-s22hg" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.397357 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/811bfe90-4b33-4bfb-969f-63d5dbde1b94-utilities\") pod \"redhat-marketplace-s22hg\" (UID: 
\"811bfe90-4b33-4bfb-969f-63d5dbde1b94\") " pod="openshift-marketplace/redhat-marketplace-s22hg" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.397439 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/811bfe90-4b33-4bfb-969f-63d5dbde1b94-catalog-content\") pod \"redhat-marketplace-s22hg\" (UID: \"811bfe90-4b33-4bfb-969f-63d5dbde1b94\") " pod="openshift-marketplace/redhat-marketplace-s22hg" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.397525 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkhc8\" (UniqueName: \"kubernetes.io/projected/811bfe90-4b33-4bfb-969f-63d5dbde1b94-kube-api-access-vkhc8\") pod \"redhat-marketplace-s22hg\" (UID: \"811bfe90-4b33-4bfb-969f-63d5dbde1b94\") " pod="openshift-marketplace/redhat-marketplace-s22hg" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.398407 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/811bfe90-4b33-4bfb-969f-63d5dbde1b94-utilities\") pod \"redhat-marketplace-s22hg\" (UID: \"811bfe90-4b33-4bfb-969f-63d5dbde1b94\") " pod="openshift-marketplace/redhat-marketplace-s22hg" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.398705 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/811bfe90-4b33-4bfb-969f-63d5dbde1b94-catalog-content\") pod \"redhat-marketplace-s22hg\" (UID: \"811bfe90-4b33-4bfb-969f-63d5dbde1b94\") " pod="openshift-marketplace/redhat-marketplace-s22hg" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.419084 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkhc8\" (UniqueName: \"kubernetes.io/projected/811bfe90-4b33-4bfb-969f-63d5dbde1b94-kube-api-access-vkhc8\") pod \"redhat-marketplace-s22hg\" (UID: 
\"811bfe90-4b33-4bfb-969f-63d5dbde1b94\") " pod="openshift-marketplace/redhat-marketplace-s22hg" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.609025 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s22hg" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.659159 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mdg8m"] Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.660332 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mdg8m" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.667209 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdg8m"] Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.701521 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5bjw\" (UniqueName: \"kubernetes.io/projected/0ef940cc-d662-4a1c-aee3-09c28bfac646-kube-api-access-z5bjw\") pod \"redhat-marketplace-mdg8m\" (UID: \"0ef940cc-d662-4a1c-aee3-09c28bfac646\") " pod="openshift-marketplace/redhat-marketplace-mdg8m" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.701558 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef940cc-d662-4a1c-aee3-09c28bfac646-utilities\") pod \"redhat-marketplace-mdg8m\" (UID: \"0ef940cc-d662-4a1c-aee3-09c28bfac646\") " pod="openshift-marketplace/redhat-marketplace-mdg8m" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.701577 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef940cc-d662-4a1c-aee3-09c28bfac646-catalog-content\") pod \"redhat-marketplace-mdg8m\" (UID: 
\"0ef940cc-d662-4a1c-aee3-09c28bfac646\") " pod="openshift-marketplace/redhat-marketplace-mdg8m" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.806087 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5bjw\" (UniqueName: \"kubernetes.io/projected/0ef940cc-d662-4a1c-aee3-09c28bfac646-kube-api-access-z5bjw\") pod \"redhat-marketplace-mdg8m\" (UID: \"0ef940cc-d662-4a1c-aee3-09c28bfac646\") " pod="openshift-marketplace/redhat-marketplace-mdg8m" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.806132 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef940cc-d662-4a1c-aee3-09c28bfac646-utilities\") pod \"redhat-marketplace-mdg8m\" (UID: \"0ef940cc-d662-4a1c-aee3-09c28bfac646\") " pod="openshift-marketplace/redhat-marketplace-mdg8m" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.806150 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef940cc-d662-4a1c-aee3-09c28bfac646-catalog-content\") pod \"redhat-marketplace-mdg8m\" (UID: \"0ef940cc-d662-4a1c-aee3-09c28bfac646\") " pod="openshift-marketplace/redhat-marketplace-mdg8m" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.806859 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef940cc-d662-4a1c-aee3-09c28bfac646-catalog-content\") pod \"redhat-marketplace-mdg8m\" (UID: \"0ef940cc-d662-4a1c-aee3-09c28bfac646\") " pod="openshift-marketplace/redhat-marketplace-mdg8m" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.806968 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef940cc-d662-4a1c-aee3-09c28bfac646-utilities\") pod \"redhat-marketplace-mdg8m\" (UID: \"0ef940cc-d662-4a1c-aee3-09c28bfac646\") " 
pod="openshift-marketplace/redhat-marketplace-mdg8m" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.837182 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5bjw\" (UniqueName: \"kubernetes.io/projected/0ef940cc-d662-4a1c-aee3-09c28bfac646-kube-api-access-z5bjw\") pod \"redhat-marketplace-mdg8m\" (UID: \"0ef940cc-d662-4a1c-aee3-09c28bfac646\") " pod="openshift-marketplace/redhat-marketplace-mdg8m" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.888825 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.889894 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.893808 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.895832 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.907149 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ad3650f-ccc3-496a-9b17-880ebc9d3eec-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7ad3650f-ccc3-496a-9b17-880ebc9d3eec\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.907263 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ad3650f-ccc3-496a-9b17-880ebc9d3eec-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7ad3650f-ccc3-496a-9b17-880ebc9d3eec\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 13:47:40 crc kubenswrapper[4833]: I0217 13:47:40.928458 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.000529 4833 generic.go:334] "Generic (PLEG): container finished" podID="7866e11a-9385-4003-9406-d4012097cbb3" containerID="3fb02e365572ee24b585ca11f79f28a7086c694ba390500db43d266c32f4da90" exitCode=0 Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.000774 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnkbw" event={"ID":"7866e11a-9385-4003-9406-d4012097cbb3","Type":"ContainerDied","Data":"3fb02e365572ee24b585ca11f79f28a7086c694ba390500db43d266c32f4da90"} Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.002404 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" event={"ID":"16569a9d-7677-455d-87c6-7b2fb504b731","Type":"ContainerStarted","Data":"c034575e4a954e33f8591823e7e7a086b15549f9611aeef632874d7d7d66743e"} Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.002436 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.006129 4833 generic.go:334] "Generic (PLEG): container finished" podID="73094c1e-9f36-472d-9275-12562b4cd250" containerID="42c8912dc7cf099c2325dd22b282a3ff870fab18cbe1f90b2fa92779540618b1" exitCode=0 Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.006195 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6vpd" event={"ID":"73094c1e-9f36-472d-9275-12562b4cd250","Type":"ContainerDied","Data":"42c8912dc7cf099c2325dd22b282a3ff870fab18cbe1f90b2fa92779540618b1"} Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.008053 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ad3650f-ccc3-496a-9b17-880ebc9d3eec-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7ad3650f-ccc3-496a-9b17-880ebc9d3eec\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.008112 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ad3650f-ccc3-496a-9b17-880ebc9d3eec-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7ad3650f-ccc3-496a-9b17-880ebc9d3eec\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.008698 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ad3650f-ccc3-496a-9b17-880ebc9d3eec-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7ad3650f-ccc3-496a-9b17-880ebc9d3eec\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.010989 4833 generic.go:334] "Generic (PLEG): container finished" podID="96680c84-52a5-4ddc-a676-7a5c71c9f3f6" containerID="5c480d9fed2ed20522805aeb85c571d15fe8f2975e6a07c1829c19b5cb6205fc" exitCode=0 Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.012011 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrrv5" event={"ID":"96680c84-52a5-4ddc-a676-7a5c71c9f3f6","Type":"ContainerDied","Data":"5c480d9fed2ed20522805aeb85c571d15fe8f2975e6a07c1829c19b5cb6205fc"} Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.028008 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mdg8m" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.049617 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ad3650f-ccc3-496a-9b17-880ebc9d3eec-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7ad3650f-ccc3-496a-9b17-880ebc9d3eec\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.060347 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.060910 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s22hg"] Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.098072 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" podStartSLOduration=130.098053417 podStartE2EDuration="2m10.098053417s" podCreationTimestamp="2026-02-17 13:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:41.094106469 +0000 UTC m=+150.729205922" watchObservedRunningTime="2026-02-17 13:47:41.098053417 +0000 UTC m=+150.733152860" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.193224 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dmmfm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:47:41 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Feb 17 13:47:41 crc kubenswrapper[4833]: [+]process-running ok Feb 17 13:47:41 crc kubenswrapper[4833]: healthz check 
failed Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.193286 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmmfm" podUID="9d9814af-c66a-49fd-a3ca-814e8b0caf48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.202251 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.347786 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdg8m"] Feb 17 13:47:41 crc kubenswrapper[4833]: W0217 13:47:41.370429 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ef940cc_d662_4a1c_aee3_09c28bfac646.slice/crio-5ef6137cea2636138d5a69f21e0d75c281a6e5e18e6d8828e19281f16df6f701 WatchSource:0}: Error finding container 5ef6137cea2636138d5a69f21e0d75c281a6e5e18e6d8828e19281f16df6f701: Status 404 returned error can't find the container with id 5ef6137cea2636138d5a69f21e0d75c281a6e5e18e6d8828e19281f16df6f701 Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.460907 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 13:47:41 crc kubenswrapper[4833]: W0217 13:47:41.482120 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7ad3650f_ccc3_496a_9b17_880ebc9d3eec.slice/crio-96a3fe811aba56e069b4ca0c522566503ff247445743bd3ea560f4f59bb0550a WatchSource:0}: Error finding container 96a3fe811aba56e069b4ca0c522566503ff247445743bd3ea560f4f59bb0550a: Status 404 returned error can't find the container with id 96a3fe811aba56e069b4ca0c522566503ff247445743bd3ea560f4f59bb0550a Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.648481 4833 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wgmx8"] Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.649769 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wgmx8" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.651563 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.658542 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wgmx8"] Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.821018 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fc2132e-3150-4783-a56a-0bd9f33d4c6c-catalog-content\") pod \"redhat-operators-wgmx8\" (UID: \"5fc2132e-3150-4783-a56a-0bd9f33d4c6c\") " pod="openshift-marketplace/redhat-operators-wgmx8" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.821289 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fc2132e-3150-4783-a56a-0bd9f33d4c6c-utilities\") pod \"redhat-operators-wgmx8\" (UID: \"5fc2132e-3150-4783-a56a-0bd9f33d4c6c\") " pod="openshift-marketplace/redhat-operators-wgmx8" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.821508 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxg8s\" (UniqueName: \"kubernetes.io/projected/5fc2132e-3150-4783-a56a-0bd9f33d4c6c-kube-api-access-qxg8s\") pod \"redhat-operators-wgmx8\" (UID: \"5fc2132e-3150-4783-a56a-0bd9f33d4c6c\") " pod="openshift-marketplace/redhat-operators-wgmx8" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.886724 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-btl28" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.924542 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fc2132e-3150-4783-a56a-0bd9f33d4c6c-utilities\") pod \"redhat-operators-wgmx8\" (UID: \"5fc2132e-3150-4783-a56a-0bd9f33d4c6c\") " pod="openshift-marketplace/redhat-operators-wgmx8" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.924819 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxg8s\" (UniqueName: \"kubernetes.io/projected/5fc2132e-3150-4783-a56a-0bd9f33d4c6c-kube-api-access-qxg8s\") pod \"redhat-operators-wgmx8\" (UID: \"5fc2132e-3150-4783-a56a-0bd9f33d4c6c\") " pod="openshift-marketplace/redhat-operators-wgmx8" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.924887 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fc2132e-3150-4783-a56a-0bd9f33d4c6c-catalog-content\") pod \"redhat-operators-wgmx8\" (UID: \"5fc2132e-3150-4783-a56a-0bd9f33d4c6c\") " pod="openshift-marketplace/redhat-operators-wgmx8" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.925285 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fc2132e-3150-4783-a56a-0bd9f33d4c6c-catalog-content\") pod \"redhat-operators-wgmx8\" (UID: \"5fc2132e-3150-4783-a56a-0bd9f33d4c6c\") " pod="openshift-marketplace/redhat-operators-wgmx8" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.925489 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fc2132e-3150-4783-a56a-0bd9f33d4c6c-utilities\") pod \"redhat-operators-wgmx8\" (UID: \"5fc2132e-3150-4783-a56a-0bd9f33d4c6c\") " 
pod="openshift-marketplace/redhat-operators-wgmx8" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.958091 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxg8s\" (UniqueName: \"kubernetes.io/projected/5fc2132e-3150-4783-a56a-0bd9f33d4c6c-kube-api-access-qxg8s\") pod \"redhat-operators-wgmx8\" (UID: \"5fc2132e-3150-4783-a56a-0bd9f33d4c6c\") " pod="openshift-marketplace/redhat-operators-wgmx8" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.967767 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-4wnpt" Feb 17 13:47:41 crc kubenswrapper[4833]: I0217 13:47:41.968871 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wgmx8" Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.017928 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.026338 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-b8pbt" Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.049734 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7wzzq"] Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.053747 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-xwz2m" Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.053778 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-xwz2m" Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.054137 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7wzzq"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.063424 4833 patch_prober.go:28] interesting pod/console-f9d7485db-xwz2m container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.063486 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-xwz2m" podUID="b617c42f-c749-41e5-a305-692a4c631656" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.067382 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7wzzq"]
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.088025 4833 generic.go:334] "Generic (PLEG): container finished" podID="aa2d375b-2949-4037-a7f2-33fff8c9fde6" containerID="f83a98b66d78b7cfbe5b8f5486c4da0c5577934efbeede77345bc35544ade1f2" exitCode=0
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.088092 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986" event={"ID":"aa2d375b-2949-4037-a7f2-33fff8c9fde6","Type":"ContainerDied","Data":"f83a98b66d78b7cfbe5b8f5486c4da0c5577934efbeede77345bc35544ade1f2"}
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.168101 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7hnk\" (UniqueName: \"kubernetes.io/projected/347afd5b-1c04-4364-87ef-82bf98a454a6-kube-api-access-b7hnk\") pod \"redhat-operators-7wzzq\" (UID: \"347afd5b-1c04-4364-87ef-82bf98a454a6\") " pod="openshift-marketplace/redhat-operators-7wzzq"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.168266 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/347afd5b-1c04-4364-87ef-82bf98a454a6-utilities\") pod \"redhat-operators-7wzzq\" (UID: \"347afd5b-1c04-4364-87ef-82bf98a454a6\") " pod="openshift-marketplace/redhat-operators-7wzzq"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.168456 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/347afd5b-1c04-4364-87ef-82bf98a454a6-catalog-content\") pod \"redhat-operators-7wzzq\" (UID: \"347afd5b-1c04-4364-87ef-82bf98a454a6\") " pod="openshift-marketplace/redhat-operators-7wzzq"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.172225 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7ad3650f-ccc3-496a-9b17-880ebc9d3eec","Type":"ContainerStarted","Data":"f3e413868a3736190d0aec848ea47609e919ffa3ca3cdf5cc5b22bbf68c6ac8c"}
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.172272 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7ad3650f-ccc3-496a-9b17-880ebc9d3eec","Type":"ContainerStarted","Data":"96a3fe811aba56e069b4ca0c522566503ff247445743bd3ea560f4f59bb0550a"}
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.179939 4833 generic.go:334] "Generic (PLEG): container finished" podID="811bfe90-4b33-4bfb-969f-63d5dbde1b94" containerID="c76e804eeccff1033ad19c1f6589acf2fb88d30d34de5ac9db6d48a0ba226ebe" exitCode=0
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.180086 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s22hg" event={"ID":"811bfe90-4b33-4bfb-969f-63d5dbde1b94","Type":"ContainerDied","Data":"c76e804eeccff1033ad19c1f6589acf2fb88d30d34de5ac9db6d48a0ba226ebe"}
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.180116 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s22hg" event={"ID":"811bfe90-4b33-4bfb-969f-63d5dbde1b94","Type":"ContainerStarted","Data":"2062c6948ddc458e86648c97708d1779bc1229e79c15f7fc28cbb60d8bea22e4"}
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.185216 4833 generic.go:334] "Generic (PLEG): container finished" podID="0ef940cc-d662-4a1c-aee3-09c28bfac646" containerID="6faa6244028001a3cbc5aec840a5fd13b54f71c92fa84a8d4a988f2aed9f40b4" exitCode=0
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.185446 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdg8m" event={"ID":"0ef940cc-d662-4a1c-aee3-09c28bfac646","Type":"ContainerDied","Data":"6faa6244028001a3cbc5aec840a5fd13b54f71c92fa84a8d4a988f2aed9f40b4"}
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.185484 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdg8m" event={"ID":"0ef940cc-d662-4a1c-aee3-09c28bfac646","Type":"ContainerStarted","Data":"5ef6137cea2636138d5a69f21e0d75c281a6e5e18e6d8828e19281f16df6f701"}
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.187152 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-dmmfm"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.190286 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dmmfm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 13:47:42 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld
Feb 17 13:47:42 crc kubenswrapper[4833]: [+]process-running ok
Feb 17 13:47:42 crc kubenswrapper[4833]: healthz check failed
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.190320 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmmfm" podUID="9d9814af-c66a-49fd-a3ca-814e8b0caf48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.213766 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.213750112 podStartE2EDuration="2.213750112s" podCreationTimestamp="2026-02-17 13:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:42.193957424 +0000 UTC m=+151.829056857" watchObservedRunningTime="2026-02-17 13:47:42.213750112 +0000 UTC m=+151.848849545"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.240183 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.261090 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xdxhg"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.270050 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7hnk\" (UniqueName: \"kubernetes.io/projected/347afd5b-1c04-4364-87ef-82bf98a454a6-kube-api-access-b7hnk\") pod \"redhat-operators-7wzzq\" (UID: \"347afd5b-1c04-4364-87ef-82bf98a454a6\") " pod="openshift-marketplace/redhat-operators-7wzzq"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.270122 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/347afd5b-1c04-4364-87ef-82bf98a454a6-utilities\") pod \"redhat-operators-7wzzq\" (UID: \"347afd5b-1c04-4364-87ef-82bf98a454a6\") " pod="openshift-marketplace/redhat-operators-7wzzq"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.270216 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/347afd5b-1c04-4364-87ef-82bf98a454a6-catalog-content\") pod \"redhat-operators-7wzzq\" (UID: \"347afd5b-1c04-4364-87ef-82bf98a454a6\") " pod="openshift-marketplace/redhat-operators-7wzzq"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.272408 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/347afd5b-1c04-4364-87ef-82bf98a454a6-catalog-content\") pod \"redhat-operators-7wzzq\" (UID: \"347afd5b-1c04-4364-87ef-82bf98a454a6\") " pod="openshift-marketplace/redhat-operators-7wzzq"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.272592 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/347afd5b-1c04-4364-87ef-82bf98a454a6-utilities\") pod \"redhat-operators-7wzzq\" (UID: \"347afd5b-1c04-4364-87ef-82bf98a454a6\") " pod="openshift-marketplace/redhat-operators-7wzzq"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.323265 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7hnk\" (UniqueName: \"kubernetes.io/projected/347afd5b-1c04-4364-87ef-82bf98a454a6-kube-api-access-b7hnk\") pod \"redhat-operators-7wzzq\" (UID: \"347afd5b-1c04-4364-87ef-82bf98a454a6\") " pod="openshift-marketplace/redhat-operators-7wzzq"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.374882 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7wzzq"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.389534 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.507833 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-fps2n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body=
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.507885 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fps2n" podUID="c91cc697-a95c-4c07-8750-878937f50446" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.508207 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-fps2n container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body=
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.508221 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fps2n" podUID="c91cc697-a95c-4c07-8750-878937f50446" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.565286 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm"
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.608559 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wgmx8"]
Feb 17 13:47:42 crc kubenswrapper[4833]: W0217 13:47:42.641836 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fc2132e_3150_4783_a56a_0bd9f33d4c6c.slice/crio-03d7882f5bfb4779adb60b00c8e7be3e81db47db83f4cdfab9d44a0ea57171a1 WatchSource:0}: Error finding container 03d7882f5bfb4779adb60b00c8e7be3e81db47db83f4cdfab9d44a0ea57171a1: Status 404 returned error can't find the container with id 03d7882f5bfb4779adb60b00c8e7be3e81db47db83f4cdfab9d44a0ea57171a1
Feb 17 13:47:42 crc kubenswrapper[4833]: I0217 13:47:42.802274 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7wzzq"]
Feb 17 13:47:42 crc kubenswrapper[4833]: W0217 13:47:42.901863 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod347afd5b_1c04_4364_87ef_82bf98a454a6.slice/crio-c7cc0754aa64e209a160de6b1d2a9d3f6df12e10f16afefc6f3c0de8e6b2bf4a WatchSource:0}: Error finding container c7cc0754aa64e209a160de6b1d2a9d3f6df12e10f16afefc6f3c0de8e6b2bf4a: Status 404 returned error can't find the container with id c7cc0754aa64e209a160de6b1d2a9d3f6df12e10f16afefc6f3c0de8e6b2bf4a
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.193725 4833 patch_prober.go:28] interesting pod/router-default-5444994796-dmmfm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 13:47:43 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld
Feb 17 13:47:43 crc kubenswrapper[4833]: [+]process-running ok
Feb 17 13:47:43 crc kubenswrapper[4833]: healthz check failed
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.193794 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmmfm" podUID="9d9814af-c66a-49fd-a3ca-814e8b0caf48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.241788 4833 generic.go:334] "Generic (PLEG): container finished" podID="7ad3650f-ccc3-496a-9b17-880ebc9d3eec" containerID="f3e413868a3736190d0aec848ea47609e919ffa3ca3cdf5cc5b22bbf68c6ac8c" exitCode=0
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.241900 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7ad3650f-ccc3-496a-9b17-880ebc9d3eec","Type":"ContainerDied","Data":"f3e413868a3736190d0aec848ea47609e919ffa3ca3cdf5cc5b22bbf68c6ac8c"}
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.249822 4833 generic.go:334] "Generic (PLEG): container finished" podID="5fc2132e-3150-4783-a56a-0bd9f33d4c6c" containerID="49d2206085a8dc072ac1485fc650c5998f664712c2ea8148b3930b83ae095f25" exitCode=0
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.249880 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgmx8" event={"ID":"5fc2132e-3150-4783-a56a-0bd9f33d4c6c","Type":"ContainerDied","Data":"49d2206085a8dc072ac1485fc650c5998f664712c2ea8148b3930b83ae095f25"}
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.249903 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgmx8" event={"ID":"5fc2132e-3150-4783-a56a-0bd9f33d4c6c","Type":"ContainerStarted","Data":"03d7882f5bfb4779adb60b00c8e7be3e81db47db83f4cdfab9d44a0ea57171a1"}
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.258379 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7wzzq" event={"ID":"347afd5b-1c04-4364-87ef-82bf98a454a6","Type":"ContainerStarted","Data":"d7fd28fe7009a80f5d1c94ae718482dc734408e40b441f9bbdb0700df17a910d"}
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.258425 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7wzzq" event={"ID":"347afd5b-1c04-4364-87ef-82bf98a454a6","Type":"ContainerStarted","Data":"c7cc0754aa64e209a160de6b1d2a9d3f6df12e10f16afefc6f3c0de8e6b2bf4a"}
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.598166 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986"
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.715241 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa2d375b-2949-4037-a7f2-33fff8c9fde6-config-volume\") pod \"aa2d375b-2949-4037-a7f2-33fff8c9fde6\" (UID: \"aa2d375b-2949-4037-a7f2-33fff8c9fde6\") "
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.715304 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa2d375b-2949-4037-a7f2-33fff8c9fde6-secret-volume\") pod \"aa2d375b-2949-4037-a7f2-33fff8c9fde6\" (UID: \"aa2d375b-2949-4037-a7f2-33fff8c9fde6\") "
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.715350 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcnjp\" (UniqueName: \"kubernetes.io/projected/aa2d375b-2949-4037-a7f2-33fff8c9fde6-kube-api-access-lcnjp\") pod \"aa2d375b-2949-4037-a7f2-33fff8c9fde6\" (UID: \"aa2d375b-2949-4037-a7f2-33fff8c9fde6\") "
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.716598 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa2d375b-2949-4037-a7f2-33fff8c9fde6-config-volume" (OuterVolumeSpecName: "config-volume") pod "aa2d375b-2949-4037-a7f2-33fff8c9fde6" (UID: "aa2d375b-2949-4037-a7f2-33fff8c9fde6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.721660 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa2d375b-2949-4037-a7f2-33fff8c9fde6-kube-api-access-lcnjp" (OuterVolumeSpecName: "kube-api-access-lcnjp") pod "aa2d375b-2949-4037-a7f2-33fff8c9fde6" (UID: "aa2d375b-2949-4037-a7f2-33fff8c9fde6"). InnerVolumeSpecName "kube-api-access-lcnjp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.721999 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa2d375b-2949-4037-a7f2-33fff8c9fde6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aa2d375b-2949-4037-a7f2-33fff8c9fde6" (UID: "aa2d375b-2949-4037-a7f2-33fff8c9fde6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.816718 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcnjp\" (UniqueName: \"kubernetes.io/projected/aa2d375b-2949-4037-a7f2-33fff8c9fde6-kube-api-access-lcnjp\") on node \"crc\" DevicePath \"\""
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.816761 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa2d375b-2949-4037-a7f2-33fff8c9fde6-config-volume\") on node \"crc\" DevicePath \"\""
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.816774 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa2d375b-2949-4037-a7f2-33fff8c9fde6-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.922860 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 17 13:47:43 crc kubenswrapper[4833]: E0217 13:47:43.924351 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa2d375b-2949-4037-a7f2-33fff8c9fde6" containerName="collect-profiles"
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.924370 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa2d375b-2949-4037-a7f2-33fff8c9fde6" containerName="collect-profiles"
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.924511 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa2d375b-2949-4037-a7f2-33fff8c9fde6" containerName="collect-profiles"
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.925305 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.927397 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.928403 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 17 13:47:43 crc kubenswrapper[4833]: I0217 13:47:43.930007 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.019775 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51dbd49e-584c-4d74-9412-d3fbc60341bd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"51dbd49e-584c-4d74-9412-d3fbc60341bd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.019823 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51dbd49e-584c-4d74-9412-d3fbc60341bd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"51dbd49e-584c-4d74-9412-d3fbc60341bd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.120572 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51dbd49e-584c-4d74-9412-d3fbc60341bd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"51dbd49e-584c-4d74-9412-d3fbc60341bd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.120613 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51dbd49e-584c-4d74-9412-d3fbc60341bd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"51dbd49e-584c-4d74-9412-d3fbc60341bd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.135194 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51dbd49e-584c-4d74-9412-d3fbc60341bd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"51dbd49e-584c-4d74-9412-d3fbc60341bd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.161845 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51dbd49e-584c-4d74-9412-d3fbc60341bd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"51dbd49e-584c-4d74-9412-d3fbc60341bd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.213861 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-dmmfm"
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.218249 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-dmmfm"
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.244183 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.244235 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.244850 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.337829 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986"
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.337976 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-s4986" event={"ID":"aa2d375b-2949-4037-a7f2-33fff8c9fde6","Type":"ContainerDied","Data":"3c4c6403629471dbf59166ac6e5f788b63b3588b4462f1b350862b5d89156d46"}
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.340153 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c4c6403629471dbf59166ac6e5f788b63b3588b4462f1b350862b5d89156d46"
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.345996 4833 generic.go:334] "Generic (PLEG): container finished" podID="347afd5b-1c04-4364-87ef-82bf98a454a6" containerID="d7fd28fe7009a80f5d1c94ae718482dc734408e40b441f9bbdb0700df17a910d" exitCode=0
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.346370 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7wzzq" event={"ID":"347afd5b-1c04-4364-87ef-82bf98a454a6","Type":"ContainerDied","Data":"d7fd28fe7009a80f5d1c94ae718482dc734408e40b441f9bbdb0700df17a910d"}
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.638364 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.695122 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.748016 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ad3650f-ccc3-496a-9b17-880ebc9d3eec-kube-api-access\") pod \"7ad3650f-ccc3-496a-9b17-880ebc9d3eec\" (UID: \"7ad3650f-ccc3-496a-9b17-880ebc9d3eec\") "
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.748426 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ad3650f-ccc3-496a-9b17-880ebc9d3eec-kubelet-dir\") pod \"7ad3650f-ccc3-496a-9b17-880ebc9d3eec\" (UID: \"7ad3650f-ccc3-496a-9b17-880ebc9d3eec\") "
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.748743 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ad3650f-ccc3-496a-9b17-880ebc9d3eec-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7ad3650f-ccc3-496a-9b17-880ebc9d3eec" (UID: "7ad3650f-ccc3-496a-9b17-880ebc9d3eec"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.768410 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ad3650f-ccc3-496a-9b17-880ebc9d3eec-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7ad3650f-ccc3-496a-9b17-880ebc9d3eec" (UID: "7ad3650f-ccc3-496a-9b17-880ebc9d3eec"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.852775 4833 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ad3650f-ccc3-496a-9b17-880ebc9d3eec-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 17 13:47:44 crc kubenswrapper[4833]: I0217 13:47:44.853091 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ad3650f-ccc3-496a-9b17-880ebc9d3eec-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 17 13:47:45 crc kubenswrapper[4833]: I0217 13:47:45.363977 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7ad3650f-ccc3-496a-9b17-880ebc9d3eec","Type":"ContainerDied","Data":"96a3fe811aba56e069b4ca0c522566503ff247445743bd3ea560f4f59bb0550a"}
Feb 17 13:47:45 crc kubenswrapper[4833]: I0217 13:47:45.364018 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96a3fe811aba56e069b4ca0c522566503ff247445743bd3ea560f4f59bb0550a"
Feb 17 13:47:45 crc kubenswrapper[4833]: I0217 13:47:45.363991 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 17 13:47:45 crc kubenswrapper[4833]: I0217 13:47:45.365845 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"51dbd49e-584c-4d74-9412-d3fbc60341bd","Type":"ContainerStarted","Data":"701b2f203a4caf807bcb598d4ef5ea08927636bf34a2df457298c6fa8f8ad9ac"}
Feb 17 13:47:46 crc kubenswrapper[4833]: I0217 13:47:46.387046 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"51dbd49e-584c-4d74-9412-d3fbc60341bd","Type":"ContainerStarted","Data":"117e0b4f59c849b18a755352a2d4d5c11d122c6c4dd48f383471b7c99fc130cb"}
Feb 17 13:47:46 crc kubenswrapper[4833]: I0217 13:47:46.413906 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.413885369 podStartE2EDuration="3.413885369s" podCreationTimestamp="2026-02-17 13:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:47:46.411633783 +0000 UTC m=+156.046733216" watchObservedRunningTime="2026-02-17 13:47:46.413885369 +0000 UTC m=+156.048984812"
Feb 17 13:47:47 crc kubenswrapper[4833]: I0217 13:47:47.401805 4833 generic.go:334] "Generic (PLEG): container finished" podID="51dbd49e-584c-4d74-9412-d3fbc60341bd" containerID="117e0b4f59c849b18a755352a2d4d5c11d122c6c4dd48f383471b7c99fc130cb" exitCode=0
Feb 17 13:47:47 crc kubenswrapper[4833]: I0217 13:47:47.401848 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"51dbd49e-584c-4d74-9412-d3fbc60341bd","Type":"ContainerDied","Data":"117e0b4f59c849b18a755352a2d4d5c11d122c6c4dd48f383471b7c99fc130cb"}
Feb 17 13:47:47 crc kubenswrapper[4833]: I0217 13:47:47.955787 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-9wgts"
Feb 17 13:47:52 crc kubenswrapper[4833]: I0217 13:47:52.083604 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-xwz2m"
Feb 17 13:47:52 crc kubenswrapper[4833]: I0217 13:47:52.090228 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-xwz2m"
Feb 17 13:47:52 crc kubenswrapper[4833]: I0217 13:47:52.507149 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-fps2n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body=
Feb 17 13:47:52 crc kubenswrapper[4833]: I0217 13:47:52.507201 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fps2n" podUID="c91cc697-a95c-4c07-8750-878937f50446" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused"
Feb 17 13:47:52 crc kubenswrapper[4833]: I0217 13:47:52.507156 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-fps2n container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body=
Feb 17 13:47:52 crc kubenswrapper[4833]: I0217 13:47:52.507304 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fps2n" podUID="c91cc697-a95c-4c07-8750-878937f50446" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused"
Feb 17 13:47:52 crc kubenswrapper[4833]: I0217 13:47:52.603203 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs\") pod \"network-metrics-daemon-4b7xf\" (UID: \"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\") " pod="openshift-multus/network-metrics-daemon-4b7xf"
Feb 17 13:47:52 crc kubenswrapper[4833]: I0217 13:47:52.617392 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c-metrics-certs\") pod \"network-metrics-daemon-4b7xf\" (UID: \"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c\") " pod="openshift-multus/network-metrics-daemon-4b7xf"
Feb 17 13:47:52 crc kubenswrapper[4833]: I0217 13:47:52.660756 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b7xf"
Feb 17 13:47:54 crc kubenswrapper[4833]: I0217 13:47:54.842544 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 13:47:55 crc kubenswrapper[4833]: I0217 13:47:55.040878 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51dbd49e-584c-4d74-9412-d3fbc60341bd-kube-api-access\") pod \"51dbd49e-584c-4d74-9412-d3fbc60341bd\" (UID: \"51dbd49e-584c-4d74-9412-d3fbc60341bd\") "
Feb 17 13:47:55 crc kubenswrapper[4833]: I0217 13:47:55.040920 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51dbd49e-584c-4d74-9412-d3fbc60341bd-kubelet-dir\") pod \"51dbd49e-584c-4d74-9412-d3fbc60341bd\" (UID: \"51dbd49e-584c-4d74-9412-d3fbc60341bd\") "
Feb 17 13:47:55 crc kubenswrapper[4833]: I0217 13:47:55.041081 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51dbd49e-584c-4d74-9412-d3fbc60341bd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "51dbd49e-584c-4d74-9412-d3fbc60341bd" (UID: "51dbd49e-584c-4d74-9412-d3fbc60341bd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:47:55 crc kubenswrapper[4833]: I0217 13:47:55.041629 4833 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51dbd49e-584c-4d74-9412-d3fbc60341bd-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 17 13:47:55 crc kubenswrapper[4833]: I0217 13:47:55.044773 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51dbd49e-584c-4d74-9412-d3fbc60341bd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "51dbd49e-584c-4d74-9412-d3fbc60341bd" (UID: "51dbd49e-584c-4d74-9412-d3fbc60341bd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:47:55 crc kubenswrapper[4833]: I0217 13:47:55.143132 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51dbd49e-584c-4d74-9412-d3fbc60341bd-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 17 13:47:55 crc kubenswrapper[4833]: I0217 13:47:55.468475 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"51dbd49e-584c-4d74-9412-d3fbc60341bd","Type":"ContainerDied","Data":"701b2f203a4caf807bcb598d4ef5ea08927636bf34a2df457298c6fa8f8ad9ac"}
Feb 17 13:47:55 crc kubenswrapper[4833]: I0217 13:47:55.468514 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="701b2f203a4caf807bcb598d4ef5ea08927636bf34a2df457298c6fa8f8ad9ac"
Feb 17 13:47:55 crc kubenswrapper[4833]: I0217 13:47:55.468563 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 13:47:58 crc kubenswrapper[4833]: I0217 13:47:58.843026 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-btl28"]
Feb 17 13:47:58 crc kubenswrapper[4833]: I0217 13:47:58.843608 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-btl28" podUID="3d4005d2-d765-4a8e-9b85-8c49d8238995" containerName="controller-manager" containerID="cri-o://61177d8a937f1a00422520bd062f3b7e3453f76fc63d5b82ada6cfb5329dd2bf" gracePeriod=30
Feb 17 13:47:58 crc kubenswrapper[4833]: I0217 13:47:58.864921 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf"]
Feb 17 13:47:58 crc kubenswrapper[4833]: I0217 13:47:58.865168 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf" podUID="a441724f-e9ab-46fe-8bb3-9564e6b87e7e" containerName="route-controller-manager" containerID="cri-o://0ce9d3bea10b330fff1bfc2e790c8e27d9ae40842b136f4592ab6140e30c71ed" gracePeriod=30
Feb 17 13:47:59 crc kubenswrapper[4833]: I0217 13:47:59.501556 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx"
Feb 17 13:48:01 crc kubenswrapper[4833]: I0217 13:48:01.503715 4833 generic.go:334] "Generic (PLEG): container finished" podID="3d4005d2-d765-4a8e-9b85-8c49d8238995" containerID="61177d8a937f1a00422520bd062f3b7e3453f76fc63d5b82ada6cfb5329dd2bf" exitCode=0
Feb 17 13:48:01 crc kubenswrapper[4833]: I0217 13:48:01.503810 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-btl28"
event={"ID":"3d4005d2-d765-4a8e-9b85-8c49d8238995","Type":"ContainerDied","Data":"61177d8a937f1a00422520bd062f3b7e3453f76fc63d5b82ada6cfb5329dd2bf"} Feb 17 13:48:01 crc kubenswrapper[4833]: I0217 13:48:01.506019 4833 generic.go:334] "Generic (PLEG): container finished" podID="a441724f-e9ab-46fe-8bb3-9564e6b87e7e" containerID="0ce9d3bea10b330fff1bfc2e790c8e27d9ae40842b136f4592ab6140e30c71ed" exitCode=0 Feb 17 13:48:01 crc kubenswrapper[4833]: I0217 13:48:01.506074 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf" event={"ID":"a441724f-e9ab-46fe-8bb3-9564e6b87e7e","Type":"ContainerDied","Data":"0ce9d3bea10b330fff1bfc2e790c8e27d9ae40842b136f4592ab6140e30c71ed"} Feb 17 13:48:01 crc kubenswrapper[4833]: I0217 13:48:01.885076 4833 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-btl28 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 17 13:48:01 crc kubenswrapper[4833]: I0217 13:48:01.885154 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-btl28" podUID="3d4005d2-d765-4a8e-9b85-8c49d8238995" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 17 13:48:01 crc kubenswrapper[4833]: I0217 13:48:01.949915 4833 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-g2gvf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 17 13:48:01 crc kubenswrapper[4833]: I0217 13:48:01.949998 4833 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf" podUID="a441724f-e9ab-46fe-8bb3-9564e6b87e7e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 17 13:48:02 crc kubenswrapper[4833]: I0217 13:48:02.513615 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-fps2n" Feb 17 13:48:07 crc kubenswrapper[4833]: I0217 13:48:07.996879 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4b7xf"] Feb 17 13:48:10 crc kubenswrapper[4833]: I0217 13:48:10.888961 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-btl28" Feb 17 13:48:10 crc kubenswrapper[4833]: I0217 13:48:10.894469 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf" Feb 17 13:48:10 crc kubenswrapper[4833]: I0217 13:48:10.915435 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-746d77cdc8-fvnzc"] Feb 17 13:48:10 crc kubenswrapper[4833]: E0217 13:48:10.915667 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a441724f-e9ab-46fe-8bb3-9564e6b87e7e" containerName="route-controller-manager" Feb 17 13:48:10 crc kubenswrapper[4833]: I0217 13:48:10.915684 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a441724f-e9ab-46fe-8bb3-9564e6b87e7e" containerName="route-controller-manager" Feb 17 13:48:10 crc kubenswrapper[4833]: E0217 13:48:10.915700 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d4005d2-d765-4a8e-9b85-8c49d8238995" containerName="controller-manager" Feb 17 13:48:10 crc kubenswrapper[4833]: I0217 13:48:10.915706 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4005d2-d765-4a8e-9b85-8c49d8238995" containerName="controller-manager" Feb 17 13:48:10 crc kubenswrapper[4833]: E0217 13:48:10.915716 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51dbd49e-584c-4d74-9412-d3fbc60341bd" containerName="pruner" Feb 17 13:48:10 crc kubenswrapper[4833]: I0217 13:48:10.915723 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="51dbd49e-584c-4d74-9412-d3fbc60341bd" containerName="pruner" Feb 17 13:48:10 crc kubenswrapper[4833]: E0217 13:48:10.915732 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad3650f-ccc3-496a-9b17-880ebc9d3eec" containerName="pruner" Feb 17 13:48:10 crc kubenswrapper[4833]: I0217 13:48:10.915738 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad3650f-ccc3-496a-9b17-880ebc9d3eec" containerName="pruner" Feb 17 13:48:10 crc kubenswrapper[4833]: I0217 13:48:10.915823 4833 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a441724f-e9ab-46fe-8bb3-9564e6b87e7e" containerName="route-controller-manager" Feb 17 13:48:10 crc kubenswrapper[4833]: I0217 13:48:10.915836 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ad3650f-ccc3-496a-9b17-880ebc9d3eec" containerName="pruner" Feb 17 13:48:10 crc kubenswrapper[4833]: I0217 13:48:10.915845 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d4005d2-d765-4a8e-9b85-8c49d8238995" containerName="controller-manager" Feb 17 13:48:10 crc kubenswrapper[4833]: I0217 13:48:10.915852 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="51dbd49e-584c-4d74-9412-d3fbc60341bd" containerName="pruner" Feb 17 13:48:10 crc kubenswrapper[4833]: I0217 13:48:10.916204 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:10 crc kubenswrapper[4833]: I0217 13:48:10.923471 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-746d77cdc8-fvnzc"] Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.048862 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d4005d2-d765-4a8e-9b85-8c49d8238995-serving-cert\") pod \"3d4005d2-d765-4a8e-9b85-8c49d8238995\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.048907 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d4005d2-d765-4a8e-9b85-8c49d8238995-proxy-ca-bundles\") pod \"3d4005d2-d765-4a8e-9b85-8c49d8238995\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.048967 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-serving-cert\") pod \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\" (UID: \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\") " Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.048990 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-client-ca\") pod \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\" (UID: \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\") " Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.049009 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f77r\" (UniqueName: \"kubernetes.io/projected/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-kube-api-access-5f77r\") pod \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\" (UID: \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\") " Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.049057 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d4005d2-d765-4a8e-9b85-8c49d8238995-config\") pod \"3d4005d2-d765-4a8e-9b85-8c49d8238995\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.049079 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-config\") pod \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\" (UID: \"a441724f-e9ab-46fe-8bb3-9564e6b87e7e\") " Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.049106 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp96f\" (UniqueName: \"kubernetes.io/projected/3d4005d2-d765-4a8e-9b85-8c49d8238995-kube-api-access-xp96f\") pod \"3d4005d2-d765-4a8e-9b85-8c49d8238995\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.049132 4833 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d4005d2-d765-4a8e-9b85-8c49d8238995-client-ca\") pod \"3d4005d2-d765-4a8e-9b85-8c49d8238995\" (UID: \"3d4005d2-d765-4a8e-9b85-8c49d8238995\") " Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.049308 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0fc04a-005e-4507-91ff-3ed49093754e-config\") pod \"controller-manager-746d77cdc8-fvnzc\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.049350 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0fc04a-005e-4507-91ff-3ed49093754e-serving-cert\") pod \"controller-manager-746d77cdc8-fvnzc\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.049373 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df0fc04a-005e-4507-91ff-3ed49093754e-client-ca\") pod \"controller-manager-746d77cdc8-fvnzc\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.049392 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7qt4\" (UniqueName: \"kubernetes.io/projected/df0fc04a-005e-4507-91ff-3ed49093754e-kube-api-access-t7qt4\") pod \"controller-manager-746d77cdc8-fvnzc\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " 
pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.049425 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df0fc04a-005e-4507-91ff-3ed49093754e-proxy-ca-bundles\") pod \"controller-manager-746d77cdc8-fvnzc\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.050750 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d4005d2-d765-4a8e-9b85-8c49d8238995-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3d4005d2-d765-4a8e-9b85-8c49d8238995" (UID: "3d4005d2-d765-4a8e-9b85-8c49d8238995"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.051324 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d4005d2-d765-4a8e-9b85-8c49d8238995-client-ca" (OuterVolumeSpecName: "client-ca") pod "3d4005d2-d765-4a8e-9b85-8c49d8238995" (UID: "3d4005d2-d765-4a8e-9b85-8c49d8238995"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.051525 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-client-ca" (OuterVolumeSpecName: "client-ca") pod "a441724f-e9ab-46fe-8bb3-9564e6b87e7e" (UID: "a441724f-e9ab-46fe-8bb3-9564e6b87e7e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.051677 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-config" (OuterVolumeSpecName: "config") pod "a441724f-e9ab-46fe-8bb3-9564e6b87e7e" (UID: "a441724f-e9ab-46fe-8bb3-9564e6b87e7e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.052343 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d4005d2-d765-4a8e-9b85-8c49d8238995-config" (OuterVolumeSpecName: "config") pod "3d4005d2-d765-4a8e-9b85-8c49d8238995" (UID: "3d4005d2-d765-4a8e-9b85-8c49d8238995"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.057165 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-kube-api-access-5f77r" (OuterVolumeSpecName: "kube-api-access-5f77r") pod "a441724f-e9ab-46fe-8bb3-9564e6b87e7e" (UID: "a441724f-e9ab-46fe-8bb3-9564e6b87e7e"). InnerVolumeSpecName "kube-api-access-5f77r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.068181 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d4005d2-d765-4a8e-9b85-8c49d8238995-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3d4005d2-d765-4a8e-9b85-8c49d8238995" (UID: "3d4005d2-d765-4a8e-9b85-8c49d8238995"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.068275 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a441724f-e9ab-46fe-8bb3-9564e6b87e7e" (UID: "a441724f-e9ab-46fe-8bb3-9564e6b87e7e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.068446 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d4005d2-d765-4a8e-9b85-8c49d8238995-kube-api-access-xp96f" (OuterVolumeSpecName: "kube-api-access-xp96f") pod "3d4005d2-d765-4a8e-9b85-8c49d8238995" (UID: "3d4005d2-d765-4a8e-9b85-8c49d8238995"). InnerVolumeSpecName "kube-api-access-xp96f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.151420 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0fc04a-005e-4507-91ff-3ed49093754e-config\") pod \"controller-manager-746d77cdc8-fvnzc\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.151865 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0fc04a-005e-4507-91ff-3ed49093754e-serving-cert\") pod \"controller-manager-746d77cdc8-fvnzc\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.151947 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/df0fc04a-005e-4507-91ff-3ed49093754e-client-ca\") pod \"controller-manager-746d77cdc8-fvnzc\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.152061 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7qt4\" (UniqueName: \"kubernetes.io/projected/df0fc04a-005e-4507-91ff-3ed49093754e-kube-api-access-t7qt4\") pod \"controller-manager-746d77cdc8-fvnzc\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.152115 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df0fc04a-005e-4507-91ff-3ed49093754e-proxy-ca-bundles\") pod \"controller-manager-746d77cdc8-fvnzc\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.152155 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.152168 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp96f\" (UniqueName: \"kubernetes.io/projected/3d4005d2-d765-4a8e-9b85-8c49d8238995-kube-api-access-xp96f\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.152177 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d4005d2-d765-4a8e-9b85-8c49d8238995-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.152186 4833 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d4005d2-d765-4a8e-9b85-8c49d8238995-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.152193 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d4005d2-d765-4a8e-9b85-8c49d8238995-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.152203 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.152210 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.152218 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f77r\" (UniqueName: \"kubernetes.io/projected/a441724f-e9ab-46fe-8bb3-9564e6b87e7e-kube-api-access-5f77r\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.152226 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d4005d2-d765-4a8e-9b85-8c49d8238995-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.153198 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0fc04a-005e-4507-91ff-3ed49093754e-config\") pod \"controller-manager-746d77cdc8-fvnzc\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.153441 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df0fc04a-005e-4507-91ff-3ed49093754e-client-ca\") pod \"controller-manager-746d77cdc8-fvnzc\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.155445 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df0fc04a-005e-4507-91ff-3ed49093754e-proxy-ca-bundles\") pod \"controller-manager-746d77cdc8-fvnzc\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.156189 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0fc04a-005e-4507-91ff-3ed49093754e-serving-cert\") pod \"controller-manager-746d77cdc8-fvnzc\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.166817 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7qt4\" (UniqueName: \"kubernetes.io/projected/df0fc04a-005e-4507-91ff-3ed49093754e-kube-api-access-t7qt4\") pod \"controller-manager-746d77cdc8-fvnzc\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.266378 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.520111 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-746d77cdc8-fvnzc"] Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.579696 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnkbw" event={"ID":"7866e11a-9385-4003-9406-d4012097cbb3","Type":"ContainerStarted","Data":"20a8001f7eda9c2f3ebdb24479222359d0251498414c3574a5f6d8058129a1f9"} Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.586640 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5q7l2" event={"ID":"3cf157f5-3e18-491b-a285-a25a7e71b2ff","Type":"ContainerStarted","Data":"057446373a9dbd10533e3447cf7d1eeb76bc69280821b7951ecc7493d3929e18"} Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.588347 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" event={"ID":"df0fc04a-005e-4507-91ff-3ed49093754e","Type":"ContainerStarted","Data":"52aedf7f5f215d75f7ad2e614e0c255477f224dfbeeced5fab1e1cf495226c99"} Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.601281 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgmx8" event={"ID":"5fc2132e-3150-4783-a56a-0bd9f33d4c6c","Type":"ContainerStarted","Data":"465919cbe0a7f71befb6f440356354424973b79733b1feb1e7f19474624e7dea"} Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.604547 4833 generic.go:334] "Generic (PLEG): container finished" podID="811bfe90-4b33-4bfb-969f-63d5dbde1b94" containerID="7a059a56185d576e6d9e7b4b4620c38d75aade037af6a05826270b2f69ff3e07" exitCode=0 Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.604615 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-s22hg" event={"ID":"811bfe90-4b33-4bfb-969f-63d5dbde1b94","Type":"ContainerDied","Data":"7a059a56185d576e6d9e7b4b4620c38d75aade037af6a05826270b2f69ff3e07"} Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.611141 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7wzzq" event={"ID":"347afd5b-1c04-4364-87ef-82bf98a454a6","Type":"ContainerStarted","Data":"d455b5d382209909fc57cefffdb0c4bbd5b256b212a1ec9f75ebd52ef9c078c5"} Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.624688 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrrv5" event={"ID":"96680c84-52a5-4ddc-a676-7a5c71c9f3f6","Type":"ContainerStarted","Data":"9d86cffc8b8a150c3cbae23a12e369502ff3ca9dd5fa64fd7cf98e558a72e37c"} Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.626480 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-btl28" event={"ID":"3d4005d2-d765-4a8e-9b85-8c49d8238995","Type":"ContainerDied","Data":"187041e73037b7a48cfa5b7776cf7077ff1d355df4f480edcb40484aa4bb8b6f"} Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.626539 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-btl28" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.626546 4833 scope.go:117] "RemoveContainer" containerID="61177d8a937f1a00422520bd062f3b7e3453f76fc63d5b82ada6cfb5329dd2bf" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.629828 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf" event={"ID":"a441724f-e9ab-46fe-8bb3-9564e6b87e7e","Type":"ContainerDied","Data":"9114b7cc66bacb852d46e88f2a93447caa749bf138691b43bf887cd0210b8f7a"} Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.629910 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.635296 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdg8m" event={"ID":"0ef940cc-d662-4a1c-aee3-09c28bfac646","Type":"ContainerStarted","Data":"5b021bd72ed4c524030c26eba12b54f72206f0f2ae6d1fd76a3254f8aae97d2d"} Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.638349 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" event={"ID":"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c","Type":"ContainerStarted","Data":"8bd5c293d86013af3badc8c39ce1a65f32a0f700588c4ad830d86b977e7efa18"} Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.638394 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" event={"ID":"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c","Type":"ContainerStarted","Data":"62a110855162a88ebd1648f980aa4ac790aae1c1bed03297df8cfa419be0f8ff"} Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.640983 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6vpd" 
event={"ID":"73094c1e-9f36-472d-9275-12562b4cd250","Type":"ContainerStarted","Data":"0f625b7a9e80e5880b46701c41477ff7925a1c8972ea5cbf7eb76259b848cd04"} Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.821581 4833 scope.go:117] "RemoveContainer" containerID="0ce9d3bea10b330fff1bfc2e790c8e27d9ae40842b136f4592ab6140e30c71ed" Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.856444 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-btl28"] Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.859760 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-btl28"] Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.879076 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf"] Feb 17 13:48:11 crc kubenswrapper[4833]: I0217 13:48:11.879119 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g2gvf"] Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.458336 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-27vkt" Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.649688 4833 generic.go:334] "Generic (PLEG): container finished" podID="96680c84-52a5-4ddc-a676-7a5c71c9f3f6" containerID="9d86cffc8b8a150c3cbae23a12e369502ff3ca9dd5fa64fd7cf98e558a72e37c" exitCode=0 Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.649789 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrrv5" event={"ID":"96680c84-52a5-4ddc-a676-7a5c71c9f3f6","Type":"ContainerDied","Data":"9d86cffc8b8a150c3cbae23a12e369502ff3ca9dd5fa64fd7cf98e558a72e37c"} Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.651370 4833 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" event={"ID":"df0fc04a-005e-4507-91ff-3ed49093754e","Type":"ContainerStarted","Data":"1549a66c975ba5657ffe2795b2bafebc8641a2638c2b43414c03b351956adb52"} Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.651574 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.653406 4833 generic.go:334] "Generic (PLEG): container finished" podID="7866e11a-9385-4003-9406-d4012097cbb3" containerID="20a8001f7eda9c2f3ebdb24479222359d0251498414c3574a5f6d8058129a1f9" exitCode=0 Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.653487 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnkbw" event={"ID":"7866e11a-9385-4003-9406-d4012097cbb3","Type":"ContainerDied","Data":"20a8001f7eda9c2f3ebdb24479222359d0251498414c3574a5f6d8058129a1f9"} Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.658152 4833 generic.go:334] "Generic (PLEG): container finished" podID="5fc2132e-3150-4783-a56a-0bd9f33d4c6c" containerID="465919cbe0a7f71befb6f440356354424973b79733b1feb1e7f19474624e7dea" exitCode=0 Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.658344 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgmx8" event={"ID":"5fc2132e-3150-4783-a56a-0bd9f33d4c6c","Type":"ContainerDied","Data":"465919cbe0a7f71befb6f440356354424973b79733b1feb1e7f19474624e7dea"} Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.661556 4833 generic.go:334] "Generic (PLEG): container finished" podID="347afd5b-1c04-4364-87ef-82bf98a454a6" containerID="d455b5d382209909fc57cefffdb0c4bbd5b256b212a1ec9f75ebd52ef9c078c5" exitCode=0 Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.661667 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-7wzzq" event={"ID":"347afd5b-1c04-4364-87ef-82bf98a454a6","Type":"ContainerDied","Data":"d455b5d382209909fc57cefffdb0c4bbd5b256b212a1ec9f75ebd52ef9c078c5"} Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.669448 4833 generic.go:334] "Generic (PLEG): container finished" podID="73094c1e-9f36-472d-9275-12562b4cd250" containerID="0f625b7a9e80e5880b46701c41477ff7925a1c8972ea5cbf7eb76259b848cd04" exitCode=0 Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.669523 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6vpd" event={"ID":"73094c1e-9f36-472d-9275-12562b4cd250","Type":"ContainerDied","Data":"0f625b7a9e80e5880b46701c41477ff7925a1c8972ea5cbf7eb76259b848cd04"} Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.671379 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.671514 4833 generic.go:334] "Generic (PLEG): container finished" podID="0ef940cc-d662-4a1c-aee3-09c28bfac646" containerID="5b021bd72ed4c524030c26eba12b54f72206f0f2ae6d1fd76a3254f8aae97d2d" exitCode=0 Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.671566 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdg8m" event={"ID":"0ef940cc-d662-4a1c-aee3-09c28bfac646","Type":"ContainerDied","Data":"5b021bd72ed4c524030c26eba12b54f72206f0f2ae6d1fd76a3254f8aae97d2d"} Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.673083 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4b7xf" event={"ID":"892b83d9-c70f-4a07-b7dd-d3f6f3e18b3c","Type":"ContainerStarted","Data":"81b3675f17622499d3768edf70edd5b036e2cf8cf20d3ac42b9890f2e13cf1c7"} Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.674737 4833 generic.go:334] "Generic (PLEG): container 
finished" podID="3cf157f5-3e18-491b-a285-a25a7e71b2ff" containerID="057446373a9dbd10533e3447cf7d1eeb76bc69280821b7951ecc7493d3929e18" exitCode=0 Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.674776 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5q7l2" event={"ID":"3cf157f5-3e18-491b-a285-a25a7e71b2ff","Type":"ContainerDied","Data":"057446373a9dbd10533e3447cf7d1eeb76bc69280821b7951ecc7493d3929e18"} Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.708930 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-4b7xf" podStartSLOduration=162.708912022 podStartE2EDuration="2m42.708912022s" podCreationTimestamp="2026-02-17 13:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:48:12.708218902 +0000 UTC m=+182.343318325" watchObservedRunningTime="2026-02-17 13:48:12.708912022 +0000 UTC m=+182.344011455" Feb 17 13:48:12 crc kubenswrapper[4833]: I0217 13:48:12.737692 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" podStartSLOduration=14.737672247 podStartE2EDuration="14.737672247s" podCreationTimestamp="2026-02-17 13:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:48:12.733935286 +0000 UTC m=+182.369034739" watchObservedRunningTime="2026-02-17 13:48:12.737672247 +0000 UTC m=+182.372771690" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.024937 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b"] Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.026527 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.031467 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.031512 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.031637 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.031846 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.032054 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.032189 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.033719 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b"] Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.068995 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d4005d2-d765-4a8e-9b85-8c49d8238995" path="/var/lib/kubelet/pods/3d4005d2-d765-4a8e-9b85-8c49d8238995/volumes" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.070190 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a441724f-e9ab-46fe-8bb3-9564e6b87e7e" path="/var/lib/kubelet/pods/a441724f-e9ab-46fe-8bb3-9564e6b87e7e/volumes" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 
13:48:13.177790 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbd2e74a-aacb-4270-b52a-d802f54b4c24-config\") pod \"route-controller-manager-86d9bb6595-n4r7b\" (UID: \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\") " pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.177935 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nptt\" (UniqueName: \"kubernetes.io/projected/dbd2e74a-aacb-4270-b52a-d802f54b4c24-kube-api-access-6nptt\") pod \"route-controller-manager-86d9bb6595-n4r7b\" (UID: \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\") " pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.178079 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbd2e74a-aacb-4270-b52a-d802f54b4c24-serving-cert\") pod \"route-controller-manager-86d9bb6595-n4r7b\" (UID: \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\") " pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.178143 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbd2e74a-aacb-4270-b52a-d802f54b4c24-client-ca\") pod \"route-controller-manager-86d9bb6595-n4r7b\" (UID: \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\") " pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.278845 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dbd2e74a-aacb-4270-b52a-d802f54b4c24-config\") pod \"route-controller-manager-86d9bb6595-n4r7b\" (UID: \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\") " pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.278905 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nptt\" (UniqueName: \"kubernetes.io/projected/dbd2e74a-aacb-4270-b52a-d802f54b4c24-kube-api-access-6nptt\") pod \"route-controller-manager-86d9bb6595-n4r7b\" (UID: \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\") " pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.278975 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbd2e74a-aacb-4270-b52a-d802f54b4c24-serving-cert\") pod \"route-controller-manager-86d9bb6595-n4r7b\" (UID: \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\") " pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.279021 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbd2e74a-aacb-4270-b52a-d802f54b4c24-client-ca\") pod \"route-controller-manager-86d9bb6595-n4r7b\" (UID: \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\") " pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.280184 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbd2e74a-aacb-4270-b52a-d802f54b4c24-client-ca\") pod \"route-controller-manager-86d9bb6595-n4r7b\" (UID: \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\") " pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" Feb 
17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.281884 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbd2e74a-aacb-4270-b52a-d802f54b4c24-config\") pod \"route-controller-manager-86d9bb6595-n4r7b\" (UID: \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\") " pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.292825 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbd2e74a-aacb-4270-b52a-d802f54b4c24-serving-cert\") pod \"route-controller-manager-86d9bb6595-n4r7b\" (UID: \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\") " pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.302289 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nptt\" (UniqueName: \"kubernetes.io/projected/dbd2e74a-aacb-4270-b52a-d802f54b4c24-kube-api-access-6nptt\") pod \"route-controller-manager-86d9bb6595-n4r7b\" (UID: \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\") " pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" Feb 17 13:48:13 crc kubenswrapper[4833]: I0217 13:48:13.356385 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" Feb 17 13:48:14 crc kubenswrapper[4833]: I0217 13:48:14.244015 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:48:14 crc kubenswrapper[4833]: I0217 13:48:14.244369 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:48:14 crc kubenswrapper[4833]: I0217 13:48:14.430525 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b"] Feb 17 13:48:14 crc kubenswrapper[4833]: W0217 13:48:14.440761 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbd2e74a_aacb_4270_b52a_d802f54b4c24.slice/crio-912904c0ecd004a4331ee223f1bdcaec906047b41059af3590e98af97ba80bb0 WatchSource:0}: Error finding container 912904c0ecd004a4331ee223f1bdcaec906047b41059af3590e98af97ba80bb0: Status 404 returned error can't find the container with id 912904c0ecd004a4331ee223f1bdcaec906047b41059af3590e98af97ba80bb0 Feb 17 13:48:14 crc kubenswrapper[4833]: I0217 13:48:14.686309 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" event={"ID":"dbd2e74a-aacb-4270-b52a-d802f54b4c24","Type":"ContainerStarted","Data":"912904c0ecd004a4331ee223f1bdcaec906047b41059af3590e98af97ba80bb0"} Feb 17 13:48:14 crc kubenswrapper[4833]: I0217 
13:48:14.688938 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5q7l2" event={"ID":"3cf157f5-3e18-491b-a285-a25a7e71b2ff","Type":"ContainerStarted","Data":"272ff41451e44eb2ec40da1c9b64ebc61e7435315a24555d9e15e946e61f2023"} Feb 17 13:48:14 crc kubenswrapper[4833]: I0217 13:48:14.702950 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5q7l2" podStartSLOduration=2.630226218 podStartE2EDuration="36.702932058s" podCreationTimestamp="2026-02-17 13:47:38 +0000 UTC" firstStartedPulling="2026-02-17 13:47:39.982268687 +0000 UTC m=+149.617368120" lastFinishedPulling="2026-02-17 13:48:14.054974527 +0000 UTC m=+183.690073960" observedRunningTime="2026-02-17 13:48:14.702486005 +0000 UTC m=+184.337585458" watchObservedRunningTime="2026-02-17 13:48:14.702932058 +0000 UTC m=+184.338031491" Feb 17 13:48:15 crc kubenswrapper[4833]: I0217 13:48:15.696187 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdg8m" event={"ID":"0ef940cc-d662-4a1c-aee3-09c28bfac646","Type":"ContainerStarted","Data":"f5e2964303cf3ff7ba5a174dfb36ea99ca59f3caf5434c794b261631faf10422"} Feb 17 13:48:15 crc kubenswrapper[4833]: I0217 13:48:15.697990 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" event={"ID":"dbd2e74a-aacb-4270-b52a-d802f54b4c24","Type":"ContainerStarted","Data":"4aa338daf12f53a8b6a98044384431a50ab11ce2d85ea4cd9b45374fff7fa8b7"} Feb 17 13:48:15 crc kubenswrapper[4833]: I0217 13:48:15.698279 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" Feb 17 13:48:15 crc kubenswrapper[4833]: I0217 13:48:15.705283 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" Feb 17 13:48:15 crc kubenswrapper[4833]: I0217 13:48:15.719483 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mdg8m" podStartSLOduration=2.703836001 podStartE2EDuration="35.719465017s" podCreationTimestamp="2026-02-17 13:47:40 +0000 UTC" firstStartedPulling="2026-02-17 13:47:42.188298025 +0000 UTC m=+151.823397448" lastFinishedPulling="2026-02-17 13:48:15.203927031 +0000 UTC m=+184.839026464" observedRunningTime="2026-02-17 13:48:15.715329714 +0000 UTC m=+185.350429167" watchObservedRunningTime="2026-02-17 13:48:15.719465017 +0000 UTC m=+185.354564440" Feb 17 13:48:16 crc kubenswrapper[4833]: I0217 13:48:16.706079 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s22hg" event={"ID":"811bfe90-4b33-4bfb-969f-63d5dbde1b94","Type":"ContainerStarted","Data":"73c03caad43fe8904f9819a2ad9753845e265d6a7fbd4c044d53d8a6bbbcc92e"} Feb 17 13:48:16 crc kubenswrapper[4833]: I0217 13:48:16.735981 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" podStartSLOduration=18.735964054 podStartE2EDuration="18.735964054s" podCreationTimestamp="2026-02-17 13:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:48:15.741649946 +0000 UTC m=+185.376749379" watchObservedRunningTime="2026-02-17 13:48:16.735964054 +0000 UTC m=+186.371063487" Feb 17 13:48:17 crc kubenswrapper[4833]: I0217 13:48:17.716214 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6vpd" event={"ID":"73094c1e-9f36-472d-9275-12562b4cd250","Type":"ContainerStarted","Data":"44269ffe950548083987f2b9d021910aa03303c09c6856a589d488fd5883bc10"} Feb 17 13:48:17 crc 
kubenswrapper[4833]: I0217 13:48:17.744520 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s22hg" podStartSLOduration=4.138749587 podStartE2EDuration="37.744499955s" podCreationTimestamp="2026-02-17 13:47:40 +0000 UTC" firstStartedPulling="2026-02-17 13:47:42.181857254 +0000 UTC m=+151.816956687" lastFinishedPulling="2026-02-17 13:48:15.787607622 +0000 UTC m=+185.422707055" observedRunningTime="2026-02-17 13:48:16.741692085 +0000 UTC m=+186.376791518" watchObservedRunningTime="2026-02-17 13:48:17.744499955 +0000 UTC m=+187.379599388" Feb 17 13:48:17 crc kubenswrapper[4833]: I0217 13:48:17.745141 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g6vpd" podStartSLOduration=3.892634097 podStartE2EDuration="39.745134404s" podCreationTimestamp="2026-02-17 13:47:38 +0000 UTC" firstStartedPulling="2026-02-17 13:47:41.008388171 +0000 UTC m=+150.643487604" lastFinishedPulling="2026-02-17 13:48:16.860888478 +0000 UTC m=+186.495987911" observedRunningTime="2026-02-17 13:48:17.743764153 +0000 UTC m=+187.378863586" watchObservedRunningTime="2026-02-17 13:48:17.745134404 +0000 UTC m=+187.380233837" Feb 17 13:48:18 crc kubenswrapper[4833]: I0217 13:48:18.291161 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:48:18 crc kubenswrapper[4833]: I0217 13:48:18.597861 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5q7l2" Feb 17 13:48:18 crc kubenswrapper[4833]: I0217 13:48:18.597913 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5q7l2" Feb 17 13:48:18 crc kubenswrapper[4833]: I0217 13:48:18.712962 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-746d77cdc8-fvnzc"] Feb 17 13:48:18 crc kubenswrapper[4833]: I0217 13:48:18.713190 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" podUID="df0fc04a-005e-4507-91ff-3ed49093754e" containerName="controller-manager" containerID="cri-o://1549a66c975ba5657ffe2795b2bafebc8641a2638c2b43414c03b351956adb52" gracePeriod=30 Feb 17 13:48:18 crc kubenswrapper[4833]: I0217 13:48:18.723362 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnkbw" event={"ID":"7866e11a-9385-4003-9406-d4012097cbb3","Type":"ContainerStarted","Data":"a7ad310c2978ae8527ee6f0a7b07ea9c14d963721633e0b6126eda824d67fad1"} Feb 17 13:48:18 crc kubenswrapper[4833]: I0217 13:48:18.725191 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgmx8" event={"ID":"5fc2132e-3150-4783-a56a-0bd9f33d4c6c","Type":"ContainerStarted","Data":"dba5f603c65d2271744e28a1a39c0c96ec73004382e95ecac27662b2056a3f6b"} Feb 17 13:48:18 crc kubenswrapper[4833]: I0217 13:48:18.761282 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vnkbw" podStartSLOduration=3.566358028 podStartE2EDuration="40.76126334s" podCreationTimestamp="2026-02-17 13:47:38 +0000 UTC" firstStartedPulling="2026-02-17 13:47:41.002557508 +0000 UTC m=+150.637656941" lastFinishedPulling="2026-02-17 13:48:18.19746282 +0000 UTC m=+187.832562253" observedRunningTime="2026-02-17 13:48:18.761141026 +0000 UTC m=+188.396240459" watchObservedRunningTime="2026-02-17 13:48:18.76126334 +0000 UTC m=+188.396362773" Feb 17 13:48:18 crc kubenswrapper[4833]: I0217 13:48:18.786152 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wgmx8" podStartSLOduration=3.26236878 podStartE2EDuration="37.786133729s" 
podCreationTimestamp="2026-02-17 13:47:41 +0000 UTC" firstStartedPulling="2026-02-17 13:47:43.255765918 +0000 UTC m=+152.890865351" lastFinishedPulling="2026-02-17 13:48:17.779530867 +0000 UTC m=+187.414630300" observedRunningTime="2026-02-17 13:48:18.784000966 +0000 UTC m=+188.419100409" watchObservedRunningTime="2026-02-17 13:48:18.786133729 +0000 UTC m=+188.421233152" Feb 17 13:48:18 crc kubenswrapper[4833]: I0217 13:48:18.817089 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b"] Feb 17 13:48:18 crc kubenswrapper[4833]: I0217 13:48:18.817282 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" podUID="dbd2e74a-aacb-4270-b52a-d802f54b4c24" containerName="route-controller-manager" containerID="cri-o://4aa338daf12f53a8b6a98044384431a50ab11ce2d85ea4cd9b45374fff7fa8b7" gracePeriod=30 Feb 17 13:48:18 crc kubenswrapper[4833]: I0217 13:48:18.835992 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vnkbw" Feb 17 13:48:18 crc kubenswrapper[4833]: I0217 13:48:18.836030 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vnkbw" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.018133 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g6vpd" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.018181 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g6vpd" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.737858 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7wzzq" 
event={"ID":"347afd5b-1c04-4364-87ef-82bf98a454a6","Type":"ContainerStarted","Data":"f603e968e735f573c8b472dba70d83d5f689d9b72f10d70d45e44c642fad7584"} Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.740153 4833 generic.go:334] "Generic (PLEG): container finished" podID="dbd2e74a-aacb-4270-b52a-d802f54b4c24" containerID="4aa338daf12f53a8b6a98044384431a50ab11ce2d85ea4cd9b45374fff7fa8b7" exitCode=0 Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.740234 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" event={"ID":"dbd2e74a-aacb-4270-b52a-d802f54b4c24","Type":"ContainerDied","Data":"4aa338daf12f53a8b6a98044384431a50ab11ce2d85ea4cd9b45374fff7fa8b7"} Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.746622 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrrv5" event={"ID":"96680c84-52a5-4ddc-a676-7a5c71c9f3f6","Type":"ContainerStarted","Data":"83dc4755f0c27cc77086682434a7b7710a03e8b46f7fd67b78d51b3ae7781606"} Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.748385 4833 generic.go:334] "Generic (PLEG): container finished" podID="df0fc04a-005e-4507-91ff-3ed49093754e" containerID="1549a66c975ba5657ffe2795b2bafebc8641a2638c2b43414c03b351956adb52" exitCode=0 Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.748743 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" event={"ID":"df0fc04a-005e-4507-91ff-3ed49093754e","Type":"ContainerDied","Data":"1549a66c975ba5657ffe2795b2bafebc8641a2638c2b43414c03b351956adb52"} Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.786776 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7wzzq" podStartSLOduration=1.9633995039999999 podStartE2EDuration="37.786753605s" podCreationTimestamp="2026-02-17 13:47:42 +0000 UTC" 
firstStartedPulling="2026-02-17 13:47:43.263829528 +0000 UTC m=+152.898928951" lastFinishedPulling="2026-02-17 13:48:19.087183619 +0000 UTC m=+188.722283052" observedRunningTime="2026-02-17 13:48:19.75831906 +0000 UTC m=+189.393418503" watchObservedRunningTime="2026-02-17 13:48:19.786753605 +0000 UTC m=+189.421853038" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.787143 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wrrv5" podStartSLOduration=3.651392966 podStartE2EDuration="41.787137576s" podCreationTimestamp="2026-02-17 13:47:38 +0000 UTC" firstStartedPulling="2026-02-17 13:47:41.013128232 +0000 UTC m=+150.648227665" lastFinishedPulling="2026-02-17 13:48:19.148872842 +0000 UTC m=+188.783972275" observedRunningTime="2026-02-17 13:48:19.782113527 +0000 UTC m=+189.417212970" watchObservedRunningTime="2026-02-17 13:48:19.787137576 +0000 UTC m=+189.422236999" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.788975 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.815196 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk"] Feb 17 13:48:19 crc kubenswrapper[4833]: E0217 13:48:19.815484 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0fc04a-005e-4507-91ff-3ed49093754e" containerName="controller-manager" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.815507 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0fc04a-005e-4507-91ff-3ed49093754e" containerName="controller-manager" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.815632 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0fc04a-005e-4507-91ff-3ed49093754e" containerName="controller-manager" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.816098 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.827916 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk"] Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.863072 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.970643 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nptt\" (UniqueName: \"kubernetes.io/projected/dbd2e74a-aacb-4270-b52a-d802f54b4c24-kube-api-access-6nptt\") pod \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\" (UID: \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\") " Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.970732 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbd2e74a-aacb-4270-b52a-d802f54b4c24-serving-cert\") pod \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\" (UID: \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\") " Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.970771 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df0fc04a-005e-4507-91ff-3ed49093754e-proxy-ca-bundles\") pod \"df0fc04a-005e-4507-91ff-3ed49093754e\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.970791 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0fc04a-005e-4507-91ff-3ed49093754e-config\") pod \"df0fc04a-005e-4507-91ff-3ed49093754e\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.970808 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df0fc04a-005e-4507-91ff-3ed49093754e-client-ca\") pod \"df0fc04a-005e-4507-91ff-3ed49093754e\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.970828 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0fc04a-005e-4507-91ff-3ed49093754e-serving-cert\") pod \"df0fc04a-005e-4507-91ff-3ed49093754e\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.970843 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbd2e74a-aacb-4270-b52a-d802f54b4c24-config\") pod \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\" (UID: \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\") " Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.970878 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7qt4\" (UniqueName: \"kubernetes.io/projected/df0fc04a-005e-4507-91ff-3ed49093754e-kube-api-access-t7qt4\") pod \"df0fc04a-005e-4507-91ff-3ed49093754e\" (UID: \"df0fc04a-005e-4507-91ff-3ed49093754e\") " Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.970921 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbd2e74a-aacb-4270-b52a-d802f54b4c24-client-ca\") pod \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\" (UID: \"dbd2e74a-aacb-4270-b52a-d802f54b4c24\") " Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.971121 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd9xq\" (UniqueName: \"kubernetes.io/projected/41127460-87b7-47b4-81c2-cebc7667e185-kube-api-access-nd9xq\") pod \"controller-manager-77c6f5cf88-4g5jk\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.971146 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41127460-87b7-47b4-81c2-cebc7667e185-serving-cert\") pod 
\"controller-manager-77c6f5cf88-4g5jk\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.971175 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41127460-87b7-47b4-81c2-cebc7667e185-config\") pod \"controller-manager-77c6f5cf88-4g5jk\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.971219 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41127460-87b7-47b4-81c2-cebc7667e185-client-ca\") pod \"controller-manager-77c6f5cf88-4g5jk\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.971262 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41127460-87b7-47b4-81c2-cebc7667e185-proxy-ca-bundles\") pod \"controller-manager-77c6f5cf88-4g5jk\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.971527 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0fc04a-005e-4507-91ff-3ed49093754e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "df0fc04a-005e-4507-91ff-3ed49093754e" (UID: "df0fc04a-005e-4507-91ff-3ed49093754e"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.971678 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbd2e74a-aacb-4270-b52a-d802f54b4c24-config" (OuterVolumeSpecName: "config") pod "dbd2e74a-aacb-4270-b52a-d802f54b4c24" (UID: "dbd2e74a-aacb-4270-b52a-d802f54b4c24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.971676 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0fc04a-005e-4507-91ff-3ed49093754e-config" (OuterVolumeSpecName: "config") pod "df0fc04a-005e-4507-91ff-3ed49093754e" (UID: "df0fc04a-005e-4507-91ff-3ed49093754e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.971695 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbd2e74a-aacb-4270-b52a-d802f54b4c24-client-ca" (OuterVolumeSpecName: "client-ca") pod "dbd2e74a-aacb-4270-b52a-d802f54b4c24" (UID: "dbd2e74a-aacb-4270-b52a-d802f54b4c24"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.973020 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0fc04a-005e-4507-91ff-3ed49093754e-client-ca" (OuterVolumeSpecName: "client-ca") pod "df0fc04a-005e-4507-91ff-3ed49093754e" (UID: "df0fc04a-005e-4507-91ff-3ed49093754e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.975574 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbd2e74a-aacb-4270-b52a-d802f54b4c24-kube-api-access-6nptt" (OuterVolumeSpecName: "kube-api-access-6nptt") pod "dbd2e74a-aacb-4270-b52a-d802f54b4c24" (UID: "dbd2e74a-aacb-4270-b52a-d802f54b4c24"). InnerVolumeSpecName "kube-api-access-6nptt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.975703 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbd2e74a-aacb-4270-b52a-d802f54b4c24-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dbd2e74a-aacb-4270-b52a-d802f54b4c24" (UID: "dbd2e74a-aacb-4270-b52a-d802f54b4c24"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.975740 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df0fc04a-005e-4507-91ff-3ed49093754e-kube-api-access-t7qt4" (OuterVolumeSpecName: "kube-api-access-t7qt4") pod "df0fc04a-005e-4507-91ff-3ed49093754e" (UID: "df0fc04a-005e-4507-91ff-3ed49093754e"). InnerVolumeSpecName "kube-api-access-t7qt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:48:19 crc kubenswrapper[4833]: I0217 13:48:19.992160 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0fc04a-005e-4507-91ff-3ed49093754e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "df0fc04a-005e-4507-91ff-3ed49093754e" (UID: "df0fc04a-005e-4507-91ff-3ed49093754e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.024835 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j"] Feb 17 13:48:20 crc kubenswrapper[4833]: E0217 13:48:20.025069 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbd2e74a-aacb-4270-b52a-d802f54b4c24" containerName="route-controller-manager" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.025086 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbd2e74a-aacb-4270-b52a-d802f54b4c24" containerName="route-controller-manager" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.025192 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbd2e74a-aacb-4270-b52a-d802f54b4c24" containerName="route-controller-manager" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.025555 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.036065 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j"] Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.072979 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd9xq\" (UniqueName: \"kubernetes.io/projected/41127460-87b7-47b4-81c2-cebc7667e185-kube-api-access-nd9xq\") pod \"controller-manager-77c6f5cf88-4g5jk\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.073024 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41127460-87b7-47b4-81c2-cebc7667e185-serving-cert\") pod 
\"controller-manager-77c6f5cf88-4g5jk\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.073070 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41127460-87b7-47b4-81c2-cebc7667e185-config\") pod \"controller-manager-77c6f5cf88-4g5jk\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.073095 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41127460-87b7-47b4-81c2-cebc7667e185-client-ca\") pod \"controller-manager-77c6f5cf88-4g5jk\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.073126 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41127460-87b7-47b4-81c2-cebc7667e185-proxy-ca-bundles\") pod \"controller-manager-77c6f5cf88-4g5jk\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.073184 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nptt\" (UniqueName: \"kubernetes.io/projected/dbd2e74a-aacb-4270-b52a-d802f54b4c24-kube-api-access-6nptt\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.073195 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbd2e74a-aacb-4270-b52a-d802f54b4c24-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 
13:48:20.073205 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df0fc04a-005e-4507-91ff-3ed49093754e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.073213 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0fc04a-005e-4507-91ff-3ed49093754e-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.073223 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df0fc04a-005e-4507-91ff-3ed49093754e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.073231 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0fc04a-005e-4507-91ff-3ed49093754e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.073240 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbd2e74a-aacb-4270-b52a-d802f54b4c24-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.073248 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7qt4\" (UniqueName: \"kubernetes.io/projected/df0fc04a-005e-4507-91ff-3ed49093754e-kube-api-access-t7qt4\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.073258 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dbd2e74a-aacb-4270-b52a-d802f54b4c24-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.074670 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/41127460-87b7-47b4-81c2-cebc7667e185-proxy-ca-bundles\") pod \"controller-manager-77c6f5cf88-4g5jk\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.074968 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41127460-87b7-47b4-81c2-cebc7667e185-config\") pod \"controller-manager-77c6f5cf88-4g5jk\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.075266 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41127460-87b7-47b4-81c2-cebc7667e185-client-ca\") pod \"controller-manager-77c6f5cf88-4g5jk\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.078237 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41127460-87b7-47b4-81c2-cebc7667e185-serving-cert\") pod \"controller-manager-77c6f5cf88-4g5jk\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.089506 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd9xq\" (UniqueName: \"kubernetes.io/projected/41127460-87b7-47b4-81c2-cebc7667e185-kube-api-access-nd9xq\") pod \"controller-manager-77c6f5cf88-4g5jk\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.113526 4833 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/community-operators-g6vpd" podUID="73094c1e-9f36-472d-9275-12562b4cd250" containerName="registry-server" probeResult="failure" output=< Feb 17 13:48:20 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Feb 17 13:48:20 crc kubenswrapper[4833]: > Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.113576 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5q7l2" podUID="3cf157f5-3e18-491b-a285-a25a7e71b2ff" containerName="registry-server" probeResult="failure" output=< Feb 17 13:48:20 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Feb 17 13:48:20 crc kubenswrapper[4833]: > Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.113619 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-vnkbw" podUID="7866e11a-9385-4003-9406-d4012097cbb3" containerName="registry-server" probeResult="failure" output=< Feb 17 13:48:20 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Feb 17 13:48:20 crc kubenswrapper[4833]: > Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.161939 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.174005 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-serving-cert\") pod \"route-controller-manager-6bb5547dcd-srn8j\" (UID: \"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\") " pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.174083 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csd7c\" (UniqueName: \"kubernetes.io/projected/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-kube-api-access-csd7c\") pod \"route-controller-manager-6bb5547dcd-srn8j\" (UID: \"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\") " pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.174109 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-client-ca\") pod \"route-controller-manager-6bb5547dcd-srn8j\" (UID: \"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\") " pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.174133 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-config\") pod \"route-controller-manager-6bb5547dcd-srn8j\" (UID: \"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\") " pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.275403 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-serving-cert\") pod \"route-controller-manager-6bb5547dcd-srn8j\" (UID: \"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\") " pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.275465 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csd7c\" (UniqueName: \"kubernetes.io/projected/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-kube-api-access-csd7c\") pod \"route-controller-manager-6bb5547dcd-srn8j\" (UID: \"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\") " pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.275491 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-client-ca\") pod \"route-controller-manager-6bb5547dcd-srn8j\" (UID: \"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\") " pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.275513 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-config\") pod \"route-controller-manager-6bb5547dcd-srn8j\" (UID: \"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\") " pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.276928 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-config\") pod \"route-controller-manager-6bb5547dcd-srn8j\" (UID: 
\"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\") " pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.278254 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-client-ca\") pod \"route-controller-manager-6bb5547dcd-srn8j\" (UID: \"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\") " pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.281527 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-serving-cert\") pod \"route-controller-manager-6bb5547dcd-srn8j\" (UID: \"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\") " pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.295488 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csd7c\" (UniqueName: \"kubernetes.io/projected/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-kube-api-access-csd7c\") pod \"route-controller-manager-6bb5547dcd-srn8j\" (UID: \"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\") " pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.340370 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.389733 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk"] Feb 17 13:48:20 crc kubenswrapper[4833]: W0217 13:48:20.410438 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41127460_87b7_47b4_81c2_cebc7667e185.slice/crio-8d9ac39092e9b9e157fd80a1363cd4fdfbd1b523ce1cf4db947e76423591ff93 WatchSource:0}: Error finding container 8d9ac39092e9b9e157fd80a1363cd4fdfbd1b523ce1cf4db947e76423591ff93: Status 404 returned error can't find the container with id 8d9ac39092e9b9e157fd80a1363cd4fdfbd1b523ce1cf4db947e76423591ff93 Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.609829 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s22hg" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.609980 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s22hg" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.660456 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s22hg" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.756586 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" event={"ID":"41127460-87b7-47b4-81c2-cebc7667e185","Type":"ContainerStarted","Data":"c7a953bcc07cdfb8d1d6b38264b598ecbc66b060ebdec3ece6938c4b14c688f1"} Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.757158 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" 
event={"ID":"41127460-87b7-47b4-81c2-cebc7667e185","Type":"ContainerStarted","Data":"8d9ac39092e9b9e157fd80a1363cd4fdfbd1b523ce1cf4db947e76423591ff93"} Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.757208 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.759694 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.759721 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b" event={"ID":"dbd2e74a-aacb-4270-b52a-d802f54b4c24","Type":"ContainerDied","Data":"912904c0ecd004a4331ee223f1bdcaec906047b41059af3590e98af97ba80bb0"} Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.759765 4833 scope.go:117] "RemoveContainer" containerID="4aa338daf12f53a8b6a98044384431a50ab11ce2d85ea4cd9b45374fff7fa8b7" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.762496 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" event={"ID":"df0fc04a-005e-4507-91ff-3ed49093754e","Type":"ContainerDied","Data":"52aedf7f5f215d75f7ad2e614e0c255477f224dfbeeced5fab1e1cf495226c99"} Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.762668 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-746d77cdc8-fvnzc" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.764767 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.798124 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" podStartSLOduration=2.79810654 podStartE2EDuration="2.79810654s" podCreationTimestamp="2026-02-17 13:48:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:48:20.796366678 +0000 UTC m=+190.431466121" watchObservedRunningTime="2026-02-17 13:48:20.79810654 +0000 UTC m=+190.433205973" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.799321 4833 scope.go:117] "RemoveContainer" containerID="1549a66c975ba5657ffe2795b2bafebc8641a2638c2b43414c03b351956adb52" Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.839962 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j"] Feb 17 13:48:20 crc kubenswrapper[4833]: W0217 13:48:20.843403 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60a2c4e5_b6e2_4c5c_b6fd_772e6ce28858.slice/crio-7c4b5cbb847506afcb8b6207a7eaac3de2bb1642a6d5357b83addd0dc603e7da WatchSource:0}: Error finding container 7c4b5cbb847506afcb8b6207a7eaac3de2bb1642a6d5357b83addd0dc603e7da: Status 404 returned error can't find the container with id 7c4b5cbb847506afcb8b6207a7eaac3de2bb1642a6d5357b83addd0dc603e7da Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.859087 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-746d77cdc8-fvnzc"] Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.871190 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-746d77cdc8-fvnzc"] Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.889616 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b"] Feb 17 13:48:20 crc kubenswrapper[4833]: I0217 13:48:20.902397 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d9bb6595-n4r7b"] Feb 17 13:48:21 crc kubenswrapper[4833]: I0217 13:48:21.028904 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mdg8m" Feb 17 13:48:21 crc kubenswrapper[4833]: I0217 13:48:21.028995 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mdg8m" Feb 17 13:48:21 crc kubenswrapper[4833]: I0217 13:48:21.051667 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbd2e74a-aacb-4270-b52a-d802f54b4c24" path="/var/lib/kubelet/pods/dbd2e74a-aacb-4270-b52a-d802f54b4c24/volumes" Feb 17 13:48:21 crc kubenswrapper[4833]: I0217 13:48:21.052200 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df0fc04a-005e-4507-91ff-3ed49093754e" path="/var/lib/kubelet/pods/df0fc04a-005e-4507-91ff-3ed49093754e/volumes" Feb 17 13:48:21 crc kubenswrapper[4833]: I0217 13:48:21.067424 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mdg8m" Feb 17 13:48:21 crc kubenswrapper[4833]: I0217 13:48:21.233287 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h2rpb"] Feb 17 13:48:21 crc kubenswrapper[4833]: I0217 13:48:21.772028 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" event={"ID":"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858","Type":"ContainerStarted","Data":"488ec823a87ffbc6a821a73f6be0a1f7cdf1cda7f628fb30d795893fdd2ac57d"} Feb 17 13:48:21 crc kubenswrapper[4833]: I0217 13:48:21.772290 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" event={"ID":"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858","Type":"ContainerStarted","Data":"7c4b5cbb847506afcb8b6207a7eaac3de2bb1642a6d5357b83addd0dc603e7da"} Feb 17 13:48:21 crc kubenswrapper[4833]: I0217 13:48:21.772304 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" Feb 17 13:48:21 crc kubenswrapper[4833]: I0217 13:48:21.778642 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" Feb 17 13:48:21 crc kubenswrapper[4833]: I0217 13:48:21.792247 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" podStartSLOduration=3.792231881 podStartE2EDuration="3.792231881s" podCreationTimestamp="2026-02-17 13:48:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:48:21.789604433 +0000 UTC m=+191.424703866" watchObservedRunningTime="2026-02-17 13:48:21.792231881 +0000 UTC m=+191.427331314" Feb 17 13:48:21 crc kubenswrapper[4833]: I0217 13:48:21.832134 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s22hg" Feb 17 13:48:21 crc kubenswrapper[4833]: I0217 13:48:21.843736 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-marketplace-mdg8m"
Feb 17 13:48:21 crc kubenswrapper[4833]: I0217 13:48:21.969233 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wgmx8"
Feb 17 13:48:21 crc kubenswrapper[4833]: I0217 13:48:21.969311 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wgmx8"
Feb 17 13:48:22 crc kubenswrapper[4833]: I0217 13:48:22.375547 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7wzzq"
Feb 17 13:48:22 crc kubenswrapper[4833]: I0217 13:48:22.375611 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7wzzq"
Feb 17 13:48:22 crc kubenswrapper[4833]: I0217 13:48:22.908854 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 17 13:48:22 crc kubenswrapper[4833]: I0217 13:48:22.909613 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 13:48:22 crc kubenswrapper[4833]: I0217 13:48:22.911734 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 17 13:48:22 crc kubenswrapper[4833]: I0217 13:48:22.911766 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 17 13:48:22 crc kubenswrapper[4833]: I0217 13:48:22.920968 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 17 13:48:23 crc kubenswrapper[4833]: I0217 13:48:23.008525 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d116908c-71af-4c4b-819d-ee668ea936e9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d116908c-71af-4c4b-819d-ee668ea936e9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 13:48:23 crc kubenswrapper[4833]: I0217 13:48:23.008607 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d116908c-71af-4c4b-819d-ee668ea936e9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d116908c-71af-4c4b-819d-ee668ea936e9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 13:48:23 crc kubenswrapper[4833]: I0217 13:48:23.009304 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wgmx8" podUID="5fc2132e-3150-4783-a56a-0bd9f33d4c6c" containerName="registry-server" probeResult="failure" output=<
Feb 17 13:48:23 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s
Feb 17 13:48:23 crc kubenswrapper[4833]: >
Feb 17 13:48:23 crc kubenswrapper[4833]: I0217 13:48:23.110430 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d116908c-71af-4c4b-819d-ee668ea936e9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d116908c-71af-4c4b-819d-ee668ea936e9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 13:48:23 crc kubenswrapper[4833]: I0217 13:48:23.110540 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d116908c-71af-4c4b-819d-ee668ea936e9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d116908c-71af-4c4b-819d-ee668ea936e9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 13:48:23 crc kubenswrapper[4833]: I0217 13:48:23.110566 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d116908c-71af-4c4b-819d-ee668ea936e9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d116908c-71af-4c4b-819d-ee668ea936e9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 13:48:23 crc kubenswrapper[4833]: I0217 13:48:23.128746 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d116908c-71af-4c4b-819d-ee668ea936e9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d116908c-71af-4c4b-819d-ee668ea936e9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 13:48:23 crc kubenswrapper[4833]: I0217 13:48:23.236943 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 13:48:23 crc kubenswrapper[4833]: I0217 13:48:23.416716 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7wzzq" podUID="347afd5b-1c04-4364-87ef-82bf98a454a6" containerName="registry-server" probeResult="failure" output=<
Feb 17 13:48:23 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s
Feb 17 13:48:23 crc kubenswrapper[4833]: >
Feb 17 13:48:23 crc kubenswrapper[4833]: I0217 13:48:23.687506 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 17 13:48:23 crc kubenswrapper[4833]: I0217 13:48:23.783841 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d116908c-71af-4c4b-819d-ee668ea936e9","Type":"ContainerStarted","Data":"beebb61edbd4dcc409c291f37545fd80c83fb8c53df6cb84ce7a319e80af89b1"}
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.059556 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdg8m"]
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.059766 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mdg8m" podUID="0ef940cc-d662-4a1c-aee3-09c28bfac646" containerName="registry-server" containerID="cri-o://f5e2964303cf3ff7ba5a174dfb36ea99ca59f3caf5434c794b261631faf10422" gracePeriod=2
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.560718 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mdg8m"
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.729949 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5bjw\" (UniqueName: \"kubernetes.io/projected/0ef940cc-d662-4a1c-aee3-09c28bfac646-kube-api-access-z5bjw\") pod \"0ef940cc-d662-4a1c-aee3-09c28bfac646\" (UID: \"0ef940cc-d662-4a1c-aee3-09c28bfac646\") "
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.730091 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef940cc-d662-4a1c-aee3-09c28bfac646-utilities\") pod \"0ef940cc-d662-4a1c-aee3-09c28bfac646\" (UID: \"0ef940cc-d662-4a1c-aee3-09c28bfac646\") "
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.730113 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef940cc-d662-4a1c-aee3-09c28bfac646-catalog-content\") pod \"0ef940cc-d662-4a1c-aee3-09c28bfac646\" (UID: \"0ef940cc-d662-4a1c-aee3-09c28bfac646\") "
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.731814 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ef940cc-d662-4a1c-aee3-09c28bfac646-utilities" (OuterVolumeSpecName: "utilities") pod "0ef940cc-d662-4a1c-aee3-09c28bfac646" (UID: "0ef940cc-d662-4a1c-aee3-09c28bfac646"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.737353 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ef940cc-d662-4a1c-aee3-09c28bfac646-kube-api-access-z5bjw" (OuterVolumeSpecName: "kube-api-access-z5bjw") pod "0ef940cc-d662-4a1c-aee3-09c28bfac646" (UID: "0ef940cc-d662-4a1c-aee3-09c28bfac646"). InnerVolumeSpecName "kube-api-access-z5bjw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.768525 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ef940cc-d662-4a1c-aee3-09c28bfac646-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ef940cc-d662-4a1c-aee3-09c28bfac646" (UID: "0ef940cc-d662-4a1c-aee3-09c28bfac646"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.794990 4833 generic.go:334] "Generic (PLEG): container finished" podID="0ef940cc-d662-4a1c-aee3-09c28bfac646" containerID="f5e2964303cf3ff7ba5a174dfb36ea99ca59f3caf5434c794b261631faf10422" exitCode=0
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.795075 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdg8m" event={"ID":"0ef940cc-d662-4a1c-aee3-09c28bfac646","Type":"ContainerDied","Data":"f5e2964303cf3ff7ba5a174dfb36ea99ca59f3caf5434c794b261631faf10422"}
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.795121 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdg8m" event={"ID":"0ef940cc-d662-4a1c-aee3-09c28bfac646","Type":"ContainerDied","Data":"5ef6137cea2636138d5a69f21e0d75c281a6e5e18e6d8828e19281f16df6f701"}
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.795141 4833 scope.go:117] "RemoveContainer" containerID="f5e2964303cf3ff7ba5a174dfb36ea99ca59f3caf5434c794b261631faf10422"
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.795185 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mdg8m"
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.796979 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d116908c-71af-4c4b-819d-ee668ea936e9","Type":"ContainerStarted","Data":"d7f7f8dd257ffee7adb50f16ea2f80df7f152ef194435c77d6a291c86c249b2c"}
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.817475 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.817452752 podStartE2EDuration="2.817452752s" podCreationTimestamp="2026-02-17 13:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:48:24.816767122 +0000 UTC m=+194.451866555" watchObservedRunningTime="2026-02-17 13:48:24.817452752 +0000 UTC m=+194.452552185"
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.830700 4833 scope.go:117] "RemoveContainer" containerID="5b021bd72ed4c524030c26eba12b54f72206f0f2ae6d1fd76a3254f8aae97d2d"
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.832394 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef940cc-d662-4a1c-aee3-09c28bfac646-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.832425 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef940cc-d662-4a1c-aee3-09c28bfac646-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.832437 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5bjw\" (UniqueName: \"kubernetes.io/projected/0ef940cc-d662-4a1c-aee3-09c28bfac646-kube-api-access-z5bjw\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.833094 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdg8m"]
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.843206 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdg8m"]
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.883092 4833 scope.go:117] "RemoveContainer" containerID="6faa6244028001a3cbc5aec840a5fd13b54f71c92fa84a8d4a988f2aed9f40b4"
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.905248 4833 scope.go:117] "RemoveContainer" containerID="f5e2964303cf3ff7ba5a174dfb36ea99ca59f3caf5434c794b261631faf10422"
Feb 17 13:48:24 crc kubenswrapper[4833]: E0217 13:48:24.905677 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5e2964303cf3ff7ba5a174dfb36ea99ca59f3caf5434c794b261631faf10422\": container with ID starting with f5e2964303cf3ff7ba5a174dfb36ea99ca59f3caf5434c794b261631faf10422 not found: ID does not exist" containerID="f5e2964303cf3ff7ba5a174dfb36ea99ca59f3caf5434c794b261631faf10422"
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.905720 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5e2964303cf3ff7ba5a174dfb36ea99ca59f3caf5434c794b261631faf10422"} err="failed to get container status \"f5e2964303cf3ff7ba5a174dfb36ea99ca59f3caf5434c794b261631faf10422\": rpc error: code = NotFound desc = could not find container \"f5e2964303cf3ff7ba5a174dfb36ea99ca59f3caf5434c794b261631faf10422\": container with ID starting with f5e2964303cf3ff7ba5a174dfb36ea99ca59f3caf5434c794b261631faf10422 not found: ID does not exist"
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.905771 4833 scope.go:117] "RemoveContainer" containerID="5b021bd72ed4c524030c26eba12b54f72206f0f2ae6d1fd76a3254f8aae97d2d"
Feb 17 13:48:24 crc kubenswrapper[4833]: E0217 13:48:24.906103 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b021bd72ed4c524030c26eba12b54f72206f0f2ae6d1fd76a3254f8aae97d2d\": container with ID starting with 5b021bd72ed4c524030c26eba12b54f72206f0f2ae6d1fd76a3254f8aae97d2d not found: ID does not exist" containerID="5b021bd72ed4c524030c26eba12b54f72206f0f2ae6d1fd76a3254f8aae97d2d"
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.906170 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b021bd72ed4c524030c26eba12b54f72206f0f2ae6d1fd76a3254f8aae97d2d"} err="failed to get container status \"5b021bd72ed4c524030c26eba12b54f72206f0f2ae6d1fd76a3254f8aae97d2d\": rpc error: code = NotFound desc = could not find container \"5b021bd72ed4c524030c26eba12b54f72206f0f2ae6d1fd76a3254f8aae97d2d\": container with ID starting with 5b021bd72ed4c524030c26eba12b54f72206f0f2ae6d1fd76a3254f8aae97d2d not found: ID does not exist"
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.906207 4833 scope.go:117] "RemoveContainer" containerID="6faa6244028001a3cbc5aec840a5fd13b54f71c92fa84a8d4a988f2aed9f40b4"
Feb 17 13:48:24 crc kubenswrapper[4833]: E0217 13:48:24.906803 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6faa6244028001a3cbc5aec840a5fd13b54f71c92fa84a8d4a988f2aed9f40b4\": container with ID starting with 6faa6244028001a3cbc5aec840a5fd13b54f71c92fa84a8d4a988f2aed9f40b4 not found: ID does not exist" containerID="6faa6244028001a3cbc5aec840a5fd13b54f71c92fa84a8d4a988f2aed9f40b4"
Feb 17 13:48:24 crc kubenswrapper[4833]: I0217 13:48:24.906847 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6faa6244028001a3cbc5aec840a5fd13b54f71c92fa84a8d4a988f2aed9f40b4"} err="failed to get container status \"6faa6244028001a3cbc5aec840a5fd13b54f71c92fa84a8d4a988f2aed9f40b4\": rpc error: code = NotFound desc = could not find container \"6faa6244028001a3cbc5aec840a5fd13b54f71c92fa84a8d4a988f2aed9f40b4\": container with ID starting with 6faa6244028001a3cbc5aec840a5fd13b54f71c92fa84a8d4a988f2aed9f40b4 not found: ID does not exist"
Feb 17 13:48:25 crc kubenswrapper[4833]: I0217 13:48:25.052655 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ef940cc-d662-4a1c-aee3-09c28bfac646" path="/var/lib/kubelet/pods/0ef940cc-d662-4a1c-aee3-09c28bfac646/volumes"
Feb 17 13:48:25 crc kubenswrapper[4833]: I0217 13:48:25.806274 4833 generic.go:334] "Generic (PLEG): container finished" podID="d116908c-71af-4c4b-819d-ee668ea936e9" containerID="d7f7f8dd257ffee7adb50f16ea2f80df7f152ef194435c77d6a291c86c249b2c" exitCode=0
Feb 17 13:48:25 crc kubenswrapper[4833]: I0217 13:48:25.806347 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d116908c-71af-4c4b-819d-ee668ea936e9","Type":"ContainerDied","Data":"d7f7f8dd257ffee7adb50f16ea2f80df7f152ef194435c77d6a291c86c249b2c"}
Feb 17 13:48:27 crc kubenswrapper[4833]: I0217 13:48:27.130120 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 13:48:27 crc kubenswrapper[4833]: I0217 13:48:27.259786 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d116908c-71af-4c4b-819d-ee668ea936e9-kubelet-dir\") pod \"d116908c-71af-4c4b-819d-ee668ea936e9\" (UID: \"d116908c-71af-4c4b-819d-ee668ea936e9\") "
Feb 17 13:48:27 crc kubenswrapper[4833]: I0217 13:48:27.259873 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d116908c-71af-4c4b-819d-ee668ea936e9-kube-api-access\") pod \"d116908c-71af-4c4b-819d-ee668ea936e9\" (UID: \"d116908c-71af-4c4b-819d-ee668ea936e9\") "
Feb 17 13:48:27 crc kubenswrapper[4833]: I0217 13:48:27.259917 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d116908c-71af-4c4b-819d-ee668ea936e9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d116908c-71af-4c4b-819d-ee668ea936e9" (UID: "d116908c-71af-4c4b-819d-ee668ea936e9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:48:27 crc kubenswrapper[4833]: I0217 13:48:27.260162 4833 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d116908c-71af-4c4b-819d-ee668ea936e9-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:27 crc kubenswrapper[4833]: I0217 13:48:27.270817 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d116908c-71af-4c4b-819d-ee668ea936e9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d116908c-71af-4c4b-819d-ee668ea936e9" (UID: "d116908c-71af-4c4b-819d-ee668ea936e9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:48:27 crc kubenswrapper[4833]: I0217 13:48:27.364092 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d116908c-71af-4c4b-819d-ee668ea936e9-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:27 crc kubenswrapper[4833]: I0217 13:48:27.830578 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d116908c-71af-4c4b-819d-ee668ea936e9","Type":"ContainerDied","Data":"beebb61edbd4dcc409c291f37545fd80c83fb8c53df6cb84ce7a319e80af89b1"}
Feb 17 13:48:27 crc kubenswrapper[4833]: I0217 13:48:27.830632 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beebb61edbd4dcc409c291f37545fd80c83fb8c53df6cb84ce7a319e80af89b1"
Feb 17 13:48:27 crc kubenswrapper[4833]: I0217 13:48:27.830706 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 13:48:28 crc kubenswrapper[4833]: I0217 13:48:28.661844 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5q7l2"
Feb 17 13:48:28 crc kubenswrapper[4833]: I0217 13:48:28.710044 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5q7l2"
Feb 17 13:48:28 crc kubenswrapper[4833]: I0217 13:48:28.870963 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vnkbw"
Feb 17 13:48:28 crc kubenswrapper[4833]: I0217 13:48:28.914009 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vnkbw"
Feb 17 13:48:29 crc kubenswrapper[4833]: I0217 13:48:29.060341 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g6vpd"
Feb 17 13:48:29 crc kubenswrapper[4833]: I0217 13:48:29.099927 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g6vpd"
Feb 17 13:48:29 crc kubenswrapper[4833]: I0217 13:48:29.290592 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wrrv5"
Feb 17 13:48:29 crc kubenswrapper[4833]: I0217 13:48:29.290635 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wrrv5"
Feb 17 13:48:29 crc kubenswrapper[4833]: I0217 13:48:29.326116 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wrrv5"
Feb 17 13:48:29 crc kubenswrapper[4833]: I0217 13:48:29.914303 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wrrv5"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.115510 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 17 13:48:30 crc kubenswrapper[4833]: E0217 13:48:30.116473 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d116908c-71af-4c4b-819d-ee668ea936e9" containerName="pruner"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.116519 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d116908c-71af-4c4b-819d-ee668ea936e9" containerName="pruner"
Feb 17 13:48:30 crc kubenswrapper[4833]: E0217 13:48:30.116574 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef940cc-d662-4a1c-aee3-09c28bfac646" containerName="registry-server"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.116597 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef940cc-d662-4a1c-aee3-09c28bfac646" containerName="registry-server"
Feb 17 13:48:30 crc kubenswrapper[4833]: E0217 13:48:30.116642 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef940cc-d662-4a1c-aee3-09c28bfac646" containerName="extract-content"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.116662 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef940cc-d662-4a1c-aee3-09c28bfac646" containerName="extract-content"
Feb 17 13:48:30 crc kubenswrapper[4833]: E0217 13:48:30.116724 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef940cc-d662-4a1c-aee3-09c28bfac646" containerName="extract-utilities"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.116743 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef940cc-d662-4a1c-aee3-09c28bfac646" containerName="extract-utilities"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.117278 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d116908c-71af-4c4b-819d-ee668ea936e9" containerName="pruner"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.117320 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef940cc-d662-4a1c-aee3-09c28bfac646" containerName="registry-server"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.118330 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.126585 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.126924 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.148670 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.203029 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42b66026-007c-4217-a549-f108d5c880e5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"42b66026-007c-4217-a549-f108d5c880e5\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.203296 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b66026-007c-4217-a549-f108d5c880e5-kube-api-access\") pod \"installer-9-crc\" (UID: \"42b66026-007c-4217-a549-f108d5c880e5\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.203700 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/42b66026-007c-4217-a549-f108d5c880e5-var-lock\") pod \"installer-9-crc\" (UID: \"42b66026-007c-4217-a549-f108d5c880e5\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.305250 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42b66026-007c-4217-a549-f108d5c880e5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"42b66026-007c-4217-a549-f108d5c880e5\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.305315 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b66026-007c-4217-a549-f108d5c880e5-kube-api-access\") pod \"installer-9-crc\" (UID: \"42b66026-007c-4217-a549-f108d5c880e5\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.305353 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/42b66026-007c-4217-a549-f108d5c880e5-var-lock\") pod \"installer-9-crc\" (UID: \"42b66026-007c-4217-a549-f108d5c880e5\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.305426 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/42b66026-007c-4217-a549-f108d5c880e5-var-lock\") pod \"installer-9-crc\" (UID: \"42b66026-007c-4217-a549-f108d5c880e5\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.305459 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42b66026-007c-4217-a549-f108d5c880e5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"42b66026-007c-4217-a549-f108d5c880e5\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.335093 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b66026-007c-4217-a549-f108d5c880e5-kube-api-access\") pod \"installer-9-crc\" (UID: \"42b66026-007c-4217-a549-f108d5c880e5\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.467135 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 13:48:30 crc kubenswrapper[4833]: I0217 13:48:30.891220 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 17 13:48:30 crc kubenswrapper[4833]: W0217 13:48:30.904964 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod42b66026_007c_4217_a549_f108d5c880e5.slice/crio-9d26d1536dc65aaf3959181781e2a8a5f7deeb0c4f14bcc8f868bc962e79464f WatchSource:0}: Error finding container 9d26d1536dc65aaf3959181781e2a8a5f7deeb0c4f14bcc8f868bc962e79464f: Status 404 returned error can't find the container with id 9d26d1536dc65aaf3959181781e2a8a5f7deeb0c4f14bcc8f868bc962e79464f
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.062898 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g6vpd"]
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.064442 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g6vpd" podUID="73094c1e-9f36-472d-9275-12562b4cd250" containerName="registry-server" containerID="cri-o://44269ffe950548083987f2b9d021910aa03303c09c6856a589d488fd5883bc10" gracePeriod=2
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.296378 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wrrv5"]
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.570992 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g6vpd"
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.724391 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73094c1e-9f36-472d-9275-12562b4cd250-catalog-content\") pod \"73094c1e-9f36-472d-9275-12562b4cd250\" (UID: \"73094c1e-9f36-472d-9275-12562b4cd250\") "
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.724440 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73094c1e-9f36-472d-9275-12562b4cd250-utilities\") pod \"73094c1e-9f36-472d-9275-12562b4cd250\" (UID: \"73094c1e-9f36-472d-9275-12562b4cd250\") "
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.724513 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmqfk\" (UniqueName: \"kubernetes.io/projected/73094c1e-9f36-472d-9275-12562b4cd250-kube-api-access-wmqfk\") pod \"73094c1e-9f36-472d-9275-12562b4cd250\" (UID: \"73094c1e-9f36-472d-9275-12562b4cd250\") "
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.725511 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73094c1e-9f36-472d-9275-12562b4cd250-utilities" (OuterVolumeSpecName: "utilities") pod "73094c1e-9f36-472d-9275-12562b4cd250" (UID: "73094c1e-9f36-472d-9275-12562b4cd250"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.731995 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73094c1e-9f36-472d-9275-12562b4cd250-kube-api-access-wmqfk" (OuterVolumeSpecName: "kube-api-access-wmqfk") pod "73094c1e-9f36-472d-9275-12562b4cd250" (UID: "73094c1e-9f36-472d-9275-12562b4cd250"). InnerVolumeSpecName "kube-api-access-wmqfk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.797167 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73094c1e-9f36-472d-9275-12562b4cd250-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73094c1e-9f36-472d-9275-12562b4cd250" (UID: "73094c1e-9f36-472d-9275-12562b4cd250"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.825763 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmqfk\" (UniqueName: \"kubernetes.io/projected/73094c1e-9f36-472d-9275-12562b4cd250-kube-api-access-wmqfk\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.825804 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73094c1e-9f36-472d-9275-12562b4cd250-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.825817 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73094c1e-9f36-472d-9275-12562b4cd250-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.857336 4833 generic.go:334] "Generic (PLEG): container finished" podID="73094c1e-9f36-472d-9275-12562b4cd250" containerID="44269ffe950548083987f2b9d021910aa03303c09c6856a589d488fd5883bc10" exitCode=0
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.857417 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6vpd" event={"ID":"73094c1e-9f36-472d-9275-12562b4cd250","Type":"ContainerDied","Data":"44269ffe950548083987f2b9d021910aa03303c09c6856a589d488fd5883bc10"}
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.857432 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g6vpd"
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.857486 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6vpd" event={"ID":"73094c1e-9f36-472d-9275-12562b4cd250","Type":"ContainerDied","Data":"10f710eca45f0e63ea5e364b5862eac8863409952474ae16202912682cce393e"}
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.857508 4833 scope.go:117] "RemoveContainer" containerID="44269ffe950548083987f2b9d021910aa03303c09c6856a589d488fd5883bc10"
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.864865 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wrrv5" podUID="96680c84-52a5-4ddc-a676-7a5c71c9f3f6" containerName="registry-server" containerID="cri-o://83dc4755f0c27cc77086682434a7b7710a03e8b46f7fd67b78d51b3ae7781606" gracePeriod=2
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.865772 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"42b66026-007c-4217-a549-f108d5c880e5","Type":"ContainerStarted","Data":"ca0ad02c4261aec9f4c943877bbed2fc17b9679b990202c6de26446668167c20"}
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.865894 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"42b66026-007c-4217-a549-f108d5c880e5","Type":"ContainerStarted","Data":"9d26d1536dc65aaf3959181781e2a8a5f7deeb0c4f14bcc8f868bc962e79464f"}
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.889997 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.889973406 podStartE2EDuration="1.889973406s" podCreationTimestamp="2026-02-17 13:48:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:48:31.884196305 +0000 UTC m=+201.519295778" watchObservedRunningTime="2026-02-17 13:48:31.889973406 +0000 UTC m=+201.525072849"
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.902850 4833 scope.go:117] "RemoveContainer" containerID="0f625b7a9e80e5880b46701c41477ff7925a1c8972ea5cbf7eb76259b848cd04"
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.920023 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g6vpd"]
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.923662 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g6vpd"]
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.927262 4833 scope.go:117] "RemoveContainer" containerID="42c8912dc7cf099c2325dd22b282a3ff870fab18cbe1f90b2fa92779540618b1"
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.951720 4833 scope.go:117] "RemoveContainer" containerID="44269ffe950548083987f2b9d021910aa03303c09c6856a589d488fd5883bc10"
Feb 17 13:48:31 crc kubenswrapper[4833]: E0217 13:48:31.952198 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44269ffe950548083987f2b9d021910aa03303c09c6856a589d488fd5883bc10\": container with ID starting with 44269ffe950548083987f2b9d021910aa03303c09c6856a589d488fd5883bc10 not found: ID does not exist" containerID="44269ffe950548083987f2b9d021910aa03303c09c6856a589d488fd5883bc10"
Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.952371 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44269ffe950548083987f2b9d021910aa03303c09c6856a589d488fd5883bc10"} err="failed to get container status \"44269ffe950548083987f2b9d021910aa03303c09c6856a589d488fd5883bc10\": rpc error: code = NotFound desc = could not find container \"44269ffe950548083987f2b9d021910aa03303c09c6856a589d488fd5883bc10\": container with ID
starting with 44269ffe950548083987f2b9d021910aa03303c09c6856a589d488fd5883bc10 not found: ID does not exist" Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.952542 4833 scope.go:117] "RemoveContainer" containerID="0f625b7a9e80e5880b46701c41477ff7925a1c8972ea5cbf7eb76259b848cd04" Feb 17 13:48:31 crc kubenswrapper[4833]: E0217 13:48:31.953162 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f625b7a9e80e5880b46701c41477ff7925a1c8972ea5cbf7eb76259b848cd04\": container with ID starting with 0f625b7a9e80e5880b46701c41477ff7925a1c8972ea5cbf7eb76259b848cd04 not found: ID does not exist" containerID="0f625b7a9e80e5880b46701c41477ff7925a1c8972ea5cbf7eb76259b848cd04" Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.953277 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f625b7a9e80e5880b46701c41477ff7925a1c8972ea5cbf7eb76259b848cd04"} err="failed to get container status \"0f625b7a9e80e5880b46701c41477ff7925a1c8972ea5cbf7eb76259b848cd04\": rpc error: code = NotFound desc = could not find container \"0f625b7a9e80e5880b46701c41477ff7925a1c8972ea5cbf7eb76259b848cd04\": container with ID starting with 0f625b7a9e80e5880b46701c41477ff7925a1c8972ea5cbf7eb76259b848cd04 not found: ID does not exist" Feb 17 13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.953386 4833 scope.go:117] "RemoveContainer" containerID="42c8912dc7cf099c2325dd22b282a3ff870fab18cbe1f90b2fa92779540618b1" Feb 17 13:48:31 crc kubenswrapper[4833]: E0217 13:48:31.953816 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42c8912dc7cf099c2325dd22b282a3ff870fab18cbe1f90b2fa92779540618b1\": container with ID starting with 42c8912dc7cf099c2325dd22b282a3ff870fab18cbe1f90b2fa92779540618b1 not found: ID does not exist" containerID="42c8912dc7cf099c2325dd22b282a3ff870fab18cbe1f90b2fa92779540618b1" Feb 17 
13:48:31 crc kubenswrapper[4833]: I0217 13:48:31.953970 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42c8912dc7cf099c2325dd22b282a3ff870fab18cbe1f90b2fa92779540618b1"} err="failed to get container status \"42c8912dc7cf099c2325dd22b282a3ff870fab18cbe1f90b2fa92779540618b1\": rpc error: code = NotFound desc = could not find container \"42c8912dc7cf099c2325dd22b282a3ff870fab18cbe1f90b2fa92779540618b1\": container with ID starting with 42c8912dc7cf099c2325dd22b282a3ff870fab18cbe1f90b2fa92779540618b1 not found: ID does not exist" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.022930 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wgmx8" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.077194 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wgmx8" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.295614 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wrrv5" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.421920 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7wzzq" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.431805 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96680c84-52a5-4ddc-a676-7a5c71c9f3f6-catalog-content\") pod \"96680c84-52a5-4ddc-a676-7a5c71c9f3f6\" (UID: \"96680c84-52a5-4ddc-a676-7a5c71c9f3f6\") " Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.431976 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69d5t\" (UniqueName: \"kubernetes.io/projected/96680c84-52a5-4ddc-a676-7a5c71c9f3f6-kube-api-access-69d5t\") pod \"96680c84-52a5-4ddc-a676-7a5c71c9f3f6\" (UID: \"96680c84-52a5-4ddc-a676-7a5c71c9f3f6\") " Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.432072 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96680c84-52a5-4ddc-a676-7a5c71c9f3f6-utilities\") pod \"96680c84-52a5-4ddc-a676-7a5c71c9f3f6\" (UID: \"96680c84-52a5-4ddc-a676-7a5c71c9f3f6\") " Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.433860 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96680c84-52a5-4ddc-a676-7a5c71c9f3f6-utilities" (OuterVolumeSpecName: "utilities") pod "96680c84-52a5-4ddc-a676-7a5c71c9f3f6" (UID: "96680c84-52a5-4ddc-a676-7a5c71c9f3f6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.457881 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96680c84-52a5-4ddc-a676-7a5c71c9f3f6-kube-api-access-69d5t" (OuterVolumeSpecName: "kube-api-access-69d5t") pod "96680c84-52a5-4ddc-a676-7a5c71c9f3f6" (UID: "96680c84-52a5-4ddc-a676-7a5c71c9f3f6"). InnerVolumeSpecName "kube-api-access-69d5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.471768 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7wzzq" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.490723 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96680c84-52a5-4ddc-a676-7a5c71c9f3f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96680c84-52a5-4ddc-a676-7a5c71c9f3f6" (UID: "96680c84-52a5-4ddc-a676-7a5c71c9f3f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.533531 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69d5t\" (UniqueName: \"kubernetes.io/projected/96680c84-52a5-4ddc-a676-7a5c71c9f3f6-kube-api-access-69d5t\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.533578 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96680c84-52a5-4ddc-a676-7a5c71c9f3f6-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.533591 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96680c84-52a5-4ddc-a676-7a5c71c9f3f6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.880260 4833 generic.go:334] "Generic (PLEG): container finished" podID="96680c84-52a5-4ddc-a676-7a5c71c9f3f6" containerID="83dc4755f0c27cc77086682434a7b7710a03e8b46f7fd67b78d51b3ae7781606" exitCode=0 Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.880348 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrrv5" event={"ID":"96680c84-52a5-4ddc-a676-7a5c71c9f3f6","Type":"ContainerDied","Data":"83dc4755f0c27cc77086682434a7b7710a03e8b46f7fd67b78d51b3ae7781606"} Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.880418 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrrv5" event={"ID":"96680c84-52a5-4ddc-a676-7a5c71c9f3f6","Type":"ContainerDied","Data":"5e8399f605bb6a9d9c9b851450ff36be3a87a9e9b89d8058637c48104c84dc95"} Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.880451 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wrrv5" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.880457 4833 scope.go:117] "RemoveContainer" containerID="83dc4755f0c27cc77086682434a7b7710a03e8b46f7fd67b78d51b3ae7781606" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.908848 4833 scope.go:117] "RemoveContainer" containerID="9d86cffc8b8a150c3cbae23a12e369502ff3ca9dd5fa64fd7cf98e558a72e37c" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.934079 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wrrv5"] Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.939809 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wrrv5"] Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.944933 4833 scope.go:117] "RemoveContainer" containerID="5c480d9fed2ed20522805aeb85c571d15fe8f2975e6a07c1829c19b5cb6205fc" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.974624 4833 scope.go:117] "RemoveContainer" containerID="83dc4755f0c27cc77086682434a7b7710a03e8b46f7fd67b78d51b3ae7781606" Feb 17 13:48:32 crc kubenswrapper[4833]: E0217 13:48:32.975320 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83dc4755f0c27cc77086682434a7b7710a03e8b46f7fd67b78d51b3ae7781606\": container with ID starting with 83dc4755f0c27cc77086682434a7b7710a03e8b46f7fd67b78d51b3ae7781606 not found: ID does not exist" containerID="83dc4755f0c27cc77086682434a7b7710a03e8b46f7fd67b78d51b3ae7781606" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.975650 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83dc4755f0c27cc77086682434a7b7710a03e8b46f7fd67b78d51b3ae7781606"} err="failed to get container status \"83dc4755f0c27cc77086682434a7b7710a03e8b46f7fd67b78d51b3ae7781606\": rpc error: code = NotFound desc = could not find 
container \"83dc4755f0c27cc77086682434a7b7710a03e8b46f7fd67b78d51b3ae7781606\": container with ID starting with 83dc4755f0c27cc77086682434a7b7710a03e8b46f7fd67b78d51b3ae7781606 not found: ID does not exist" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.975821 4833 scope.go:117] "RemoveContainer" containerID="9d86cffc8b8a150c3cbae23a12e369502ff3ca9dd5fa64fd7cf98e558a72e37c" Feb 17 13:48:32 crc kubenswrapper[4833]: E0217 13:48:32.976564 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d86cffc8b8a150c3cbae23a12e369502ff3ca9dd5fa64fd7cf98e558a72e37c\": container with ID starting with 9d86cffc8b8a150c3cbae23a12e369502ff3ca9dd5fa64fd7cf98e558a72e37c not found: ID does not exist" containerID="9d86cffc8b8a150c3cbae23a12e369502ff3ca9dd5fa64fd7cf98e558a72e37c" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.976743 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d86cffc8b8a150c3cbae23a12e369502ff3ca9dd5fa64fd7cf98e558a72e37c"} err="failed to get container status \"9d86cffc8b8a150c3cbae23a12e369502ff3ca9dd5fa64fd7cf98e558a72e37c\": rpc error: code = NotFound desc = could not find container \"9d86cffc8b8a150c3cbae23a12e369502ff3ca9dd5fa64fd7cf98e558a72e37c\": container with ID starting with 9d86cffc8b8a150c3cbae23a12e369502ff3ca9dd5fa64fd7cf98e558a72e37c not found: ID does not exist" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.976895 4833 scope.go:117] "RemoveContainer" containerID="5c480d9fed2ed20522805aeb85c571d15fe8f2975e6a07c1829c19b5cb6205fc" Feb 17 13:48:32 crc kubenswrapper[4833]: E0217 13:48:32.977598 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c480d9fed2ed20522805aeb85c571d15fe8f2975e6a07c1829c19b5cb6205fc\": container with ID starting with 5c480d9fed2ed20522805aeb85c571d15fe8f2975e6a07c1829c19b5cb6205fc not found: ID does 
not exist" containerID="5c480d9fed2ed20522805aeb85c571d15fe8f2975e6a07c1829c19b5cb6205fc" Feb 17 13:48:32 crc kubenswrapper[4833]: I0217 13:48:32.977628 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c480d9fed2ed20522805aeb85c571d15fe8f2975e6a07c1829c19b5cb6205fc"} err="failed to get container status \"5c480d9fed2ed20522805aeb85c571d15fe8f2975e6a07c1829c19b5cb6205fc\": rpc error: code = NotFound desc = could not find container \"5c480d9fed2ed20522805aeb85c571d15fe8f2975e6a07c1829c19b5cb6205fc\": container with ID starting with 5c480d9fed2ed20522805aeb85c571d15fe8f2975e6a07c1829c19b5cb6205fc not found: ID does not exist" Feb 17 13:48:33 crc kubenswrapper[4833]: I0217 13:48:33.054693 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73094c1e-9f36-472d-9275-12562b4cd250" path="/var/lib/kubelet/pods/73094c1e-9f36-472d-9275-12562b4cd250/volumes" Feb 17 13:48:33 crc kubenswrapper[4833]: I0217 13:48:33.056426 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96680c84-52a5-4ddc-a676-7a5c71c9f3f6" path="/var/lib/kubelet/pods/96680c84-52a5-4ddc-a676-7a5c71c9f3f6/volumes" Feb 17 13:48:35 crc kubenswrapper[4833]: I0217 13:48:35.467645 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7wzzq"] Feb 17 13:48:35 crc kubenswrapper[4833]: I0217 13:48:35.468423 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7wzzq" podUID="347afd5b-1c04-4364-87ef-82bf98a454a6" containerName="registry-server" containerID="cri-o://f603e968e735f573c8b472dba70d83d5f689d9b72f10d70d45e44c642fad7584" gracePeriod=2 Feb 17 13:48:35 crc kubenswrapper[4833]: I0217 13:48:35.914088 4833 generic.go:334] "Generic (PLEG): container finished" podID="347afd5b-1c04-4364-87ef-82bf98a454a6" containerID="f603e968e735f573c8b472dba70d83d5f689d9b72f10d70d45e44c642fad7584" exitCode=0 Feb 17 13:48:35 crc 
kubenswrapper[4833]: I0217 13:48:35.914531 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7wzzq" event={"ID":"347afd5b-1c04-4364-87ef-82bf98a454a6","Type":"ContainerDied","Data":"f603e968e735f573c8b472dba70d83d5f689d9b72f10d70d45e44c642fad7584"} Feb 17 13:48:36 crc kubenswrapper[4833]: I0217 13:48:36.034255 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7wzzq" Feb 17 13:48:36 crc kubenswrapper[4833]: I0217 13:48:36.179126 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/347afd5b-1c04-4364-87ef-82bf98a454a6-catalog-content\") pod \"347afd5b-1c04-4364-87ef-82bf98a454a6\" (UID: \"347afd5b-1c04-4364-87ef-82bf98a454a6\") " Feb 17 13:48:36 crc kubenswrapper[4833]: I0217 13:48:36.179286 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7hnk\" (UniqueName: \"kubernetes.io/projected/347afd5b-1c04-4364-87ef-82bf98a454a6-kube-api-access-b7hnk\") pod \"347afd5b-1c04-4364-87ef-82bf98a454a6\" (UID: \"347afd5b-1c04-4364-87ef-82bf98a454a6\") " Feb 17 13:48:36 crc kubenswrapper[4833]: I0217 13:48:36.179412 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/347afd5b-1c04-4364-87ef-82bf98a454a6-utilities\") pod \"347afd5b-1c04-4364-87ef-82bf98a454a6\" (UID: \"347afd5b-1c04-4364-87ef-82bf98a454a6\") " Feb 17 13:48:36 crc kubenswrapper[4833]: I0217 13:48:36.180731 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/347afd5b-1c04-4364-87ef-82bf98a454a6-utilities" (OuterVolumeSpecName: "utilities") pod "347afd5b-1c04-4364-87ef-82bf98a454a6" (UID: "347afd5b-1c04-4364-87ef-82bf98a454a6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:48:36 crc kubenswrapper[4833]: I0217 13:48:36.188644 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347afd5b-1c04-4364-87ef-82bf98a454a6-kube-api-access-b7hnk" (OuterVolumeSpecName: "kube-api-access-b7hnk") pod "347afd5b-1c04-4364-87ef-82bf98a454a6" (UID: "347afd5b-1c04-4364-87ef-82bf98a454a6"). InnerVolumeSpecName "kube-api-access-b7hnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:48:36 crc kubenswrapper[4833]: I0217 13:48:36.281811 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7hnk\" (UniqueName: \"kubernetes.io/projected/347afd5b-1c04-4364-87ef-82bf98a454a6-kube-api-access-b7hnk\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:36 crc kubenswrapper[4833]: I0217 13:48:36.281874 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/347afd5b-1c04-4364-87ef-82bf98a454a6-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:36 crc kubenswrapper[4833]: I0217 13:48:36.365439 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/347afd5b-1c04-4364-87ef-82bf98a454a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "347afd5b-1c04-4364-87ef-82bf98a454a6" (UID: "347afd5b-1c04-4364-87ef-82bf98a454a6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:48:36 crc kubenswrapper[4833]: I0217 13:48:36.383031 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/347afd5b-1c04-4364-87ef-82bf98a454a6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:36 crc kubenswrapper[4833]: I0217 13:48:36.923015 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7wzzq" event={"ID":"347afd5b-1c04-4364-87ef-82bf98a454a6","Type":"ContainerDied","Data":"c7cc0754aa64e209a160de6b1d2a9d3f6df12e10f16afefc6f3c0de8e6b2bf4a"} Feb 17 13:48:36 crc kubenswrapper[4833]: I0217 13:48:36.923483 4833 scope.go:117] "RemoveContainer" containerID="f603e968e735f573c8b472dba70d83d5f689d9b72f10d70d45e44c642fad7584" Feb 17 13:48:36 crc kubenswrapper[4833]: I0217 13:48:36.923128 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7wzzq" Feb 17 13:48:36 crc kubenswrapper[4833]: I0217 13:48:36.961410 4833 scope.go:117] "RemoveContainer" containerID="d455b5d382209909fc57cefffdb0c4bbd5b256b212a1ec9f75ebd52ef9c078c5" Feb 17 13:48:36 crc kubenswrapper[4833]: I0217 13:48:36.964704 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7wzzq"] Feb 17 13:48:36 crc kubenswrapper[4833]: I0217 13:48:36.970161 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7wzzq"] Feb 17 13:48:36 crc kubenswrapper[4833]: I0217 13:48:36.989862 4833 scope.go:117] "RemoveContainer" containerID="d7fd28fe7009a80f5d1c94ae718482dc734408e40b441f9bbdb0700df17a910d" Feb 17 13:48:37 crc kubenswrapper[4833]: I0217 13:48:37.048702 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="347afd5b-1c04-4364-87ef-82bf98a454a6" path="/var/lib/kubelet/pods/347afd5b-1c04-4364-87ef-82bf98a454a6/volumes" Feb 17 13:48:38 crc 
kubenswrapper[4833]: I0217 13:48:38.725259 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk"] Feb 17 13:48:38 crc kubenswrapper[4833]: I0217 13:48:38.725779 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" podUID="41127460-87b7-47b4-81c2-cebc7667e185" containerName="controller-manager" containerID="cri-o://c7a953bcc07cdfb8d1d6b38264b598ecbc66b060ebdec3ece6938c4b14c688f1" gracePeriod=30 Feb 17 13:48:38 crc kubenswrapper[4833]: I0217 13:48:38.737318 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j"] Feb 17 13:48:38 crc kubenswrapper[4833]: I0217 13:48:38.737516 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" podUID="60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858" containerName="route-controller-manager" containerID="cri-o://488ec823a87ffbc6a821a73f6be0a1f7cdf1cda7f628fb30d795893fdd2ac57d" gracePeriod=30 Feb 17 13:48:38 crc kubenswrapper[4833]: I0217 13:48:38.938323 4833 generic.go:334] "Generic (PLEG): container finished" podID="60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858" containerID="488ec823a87ffbc6a821a73f6be0a1f7cdf1cda7f628fb30d795893fdd2ac57d" exitCode=0 Feb 17 13:48:38 crc kubenswrapper[4833]: I0217 13:48:38.938390 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" event={"ID":"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858","Type":"ContainerDied","Data":"488ec823a87ffbc6a821a73f6be0a1f7cdf1cda7f628fb30d795893fdd2ac57d"} Feb 17 13:48:38 crc kubenswrapper[4833]: I0217 13:48:38.939770 4833 generic.go:334] "Generic (PLEG): container finished" podID="41127460-87b7-47b4-81c2-cebc7667e185" 
containerID="c7a953bcc07cdfb8d1d6b38264b598ecbc66b060ebdec3ece6938c4b14c688f1" exitCode=0 Feb 17 13:48:38 crc kubenswrapper[4833]: I0217 13:48:38.939794 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" event={"ID":"41127460-87b7-47b4-81c2-cebc7667e185","Type":"ContainerDied","Data":"c7a953bcc07cdfb8d1d6b38264b598ecbc66b060ebdec3ece6938c4b14c688f1"} Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.231476 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.308048 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.318906 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csd7c\" (UniqueName: \"kubernetes.io/projected/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-kube-api-access-csd7c\") pod \"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\" (UID: \"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\") " Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.318974 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-client-ca\") pod \"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\" (UID: \"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\") " Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.319006 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-config\") pod \"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\" (UID: \"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\") " Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.319132 4833 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-serving-cert\") pod \"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\" (UID: \"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858\") " Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.319893 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-config" (OuterVolumeSpecName: "config") pod "60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858" (UID: "60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.319911 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-client-ca" (OuterVolumeSpecName: "client-ca") pod "60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858" (UID: "60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.376462 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-kube-api-access-csd7c" (OuterVolumeSpecName: "kube-api-access-csd7c") pod "60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858" (UID: "60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858"). InnerVolumeSpecName "kube-api-access-csd7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.377174 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858" (UID: "60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.420057 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41127460-87b7-47b4-81c2-cebc7667e185-serving-cert\") pod \"41127460-87b7-47b4-81c2-cebc7667e185\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.420199 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41127460-87b7-47b4-81c2-cebc7667e185-proxy-ca-bundles\") pod \"41127460-87b7-47b4-81c2-cebc7667e185\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.420236 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41127460-87b7-47b4-81c2-cebc7667e185-config\") pod \"41127460-87b7-47b4-81c2-cebc7667e185\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.420258 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd9xq\" (UniqueName: \"kubernetes.io/projected/41127460-87b7-47b4-81c2-cebc7667e185-kube-api-access-nd9xq\") pod \"41127460-87b7-47b4-81c2-cebc7667e185\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.420303 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41127460-87b7-47b4-81c2-cebc7667e185-client-ca\") pod \"41127460-87b7-47b4-81c2-cebc7667e185\" (UID: \"41127460-87b7-47b4-81c2-cebc7667e185\") " Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.420507 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.420522 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.420533 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.420543 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csd7c\" (UniqueName: \"kubernetes.io/projected/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858-kube-api-access-csd7c\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.420942 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41127460-87b7-47b4-81c2-cebc7667e185-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "41127460-87b7-47b4-81c2-cebc7667e185" (UID: "41127460-87b7-47b4-81c2-cebc7667e185"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.420967 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41127460-87b7-47b4-81c2-cebc7667e185-client-ca" (OuterVolumeSpecName: "client-ca") pod "41127460-87b7-47b4-81c2-cebc7667e185" (UID: "41127460-87b7-47b4-81c2-cebc7667e185"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.421000 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41127460-87b7-47b4-81c2-cebc7667e185-config" (OuterVolumeSpecName: "config") pod "41127460-87b7-47b4-81c2-cebc7667e185" (UID: "41127460-87b7-47b4-81c2-cebc7667e185"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.422692 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41127460-87b7-47b4-81c2-cebc7667e185-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "41127460-87b7-47b4-81c2-cebc7667e185" (UID: "41127460-87b7-47b4-81c2-cebc7667e185"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.424397 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41127460-87b7-47b4-81c2-cebc7667e185-kube-api-access-nd9xq" (OuterVolumeSpecName: "kube-api-access-nd9xq") pod "41127460-87b7-47b4-81c2-cebc7667e185" (UID: "41127460-87b7-47b4-81c2-cebc7667e185"). InnerVolumeSpecName "kube-api-access-nd9xq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.522178 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41127460-87b7-47b4-81c2-cebc7667e185-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.522223 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41127460-87b7-47b4-81c2-cebc7667e185-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.522241 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41127460-87b7-47b4-81c2-cebc7667e185-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.522261 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41127460-87b7-47b4-81c2-cebc7667e185-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.522282 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd9xq\" (UniqueName: \"kubernetes.io/projected/41127460-87b7-47b4-81c2-cebc7667e185-kube-api-access-nd9xq\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.950589 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" event={"ID":"41127460-87b7-47b4-81c2-cebc7667e185","Type":"ContainerDied","Data":"8d9ac39092e9b9e157fd80a1363cd4fdfbd1b523ce1cf4db947e76423591ff93"} Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.950606 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.950671 4833 scope.go:117] "RemoveContainer" containerID="c7a953bcc07cdfb8d1d6b38264b598ecbc66b060ebdec3ece6938c4b14c688f1" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.955608 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" event={"ID":"60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858","Type":"ContainerDied","Data":"7c4b5cbb847506afcb8b6207a7eaac3de2bb1642a6d5357b83addd0dc603e7da"} Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.955739 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j" Feb 17 13:48:39 crc kubenswrapper[4833]: I0217 13:48:39.977644 4833 scope.go:117] "RemoveContainer" containerID="488ec823a87ffbc6a821a73f6be0a1f7cdf1cda7f628fb30d795893fdd2ac57d" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.007762 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk"] Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.016191 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77c6f5cf88-4g5jk"] Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.020216 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j"] Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.023147 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb5547dcd-srn8j"] Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.046237 4833 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf"] Feb 17 13:48:40 crc kubenswrapper[4833]: E0217 13:48:40.046615 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73094c1e-9f36-472d-9275-12562b4cd250" containerName="extract-content" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.046643 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="73094c1e-9f36-472d-9275-12562b4cd250" containerName="extract-content" Feb 17 13:48:40 crc kubenswrapper[4833]: E0217 13:48:40.046660 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347afd5b-1c04-4364-87ef-82bf98a454a6" containerName="extract-content" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.046672 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="347afd5b-1c04-4364-87ef-82bf98a454a6" containerName="extract-content" Feb 17 13:48:40 crc kubenswrapper[4833]: E0217 13:48:40.046686 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73094c1e-9f36-472d-9275-12562b4cd250" containerName="registry-server" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.046697 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="73094c1e-9f36-472d-9275-12562b4cd250" containerName="registry-server" Feb 17 13:48:40 crc kubenswrapper[4833]: E0217 13:48:40.046709 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96680c84-52a5-4ddc-a676-7a5c71c9f3f6" containerName="registry-server" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.046721 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="96680c84-52a5-4ddc-a676-7a5c71c9f3f6" containerName="registry-server" Feb 17 13:48:40 crc kubenswrapper[4833]: E0217 13:48:40.046734 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96680c84-52a5-4ddc-a676-7a5c71c9f3f6" containerName="extract-utilities" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.046745 4833 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="96680c84-52a5-4ddc-a676-7a5c71c9f3f6" containerName="extract-utilities" Feb 17 13:48:40 crc kubenswrapper[4833]: E0217 13:48:40.046758 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41127460-87b7-47b4-81c2-cebc7667e185" containerName="controller-manager" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.046768 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="41127460-87b7-47b4-81c2-cebc7667e185" containerName="controller-manager" Feb 17 13:48:40 crc kubenswrapper[4833]: E0217 13:48:40.046785 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347afd5b-1c04-4364-87ef-82bf98a454a6" containerName="registry-server" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.046796 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="347afd5b-1c04-4364-87ef-82bf98a454a6" containerName="registry-server" Feb 17 13:48:40 crc kubenswrapper[4833]: E0217 13:48:40.046808 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96680c84-52a5-4ddc-a676-7a5c71c9f3f6" containerName="extract-content" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.046818 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="96680c84-52a5-4ddc-a676-7a5c71c9f3f6" containerName="extract-content" Feb 17 13:48:40 crc kubenswrapper[4833]: E0217 13:48:40.046834 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73094c1e-9f36-472d-9275-12562b4cd250" containerName="extract-utilities" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.046845 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="73094c1e-9f36-472d-9275-12562b4cd250" containerName="extract-utilities" Feb 17 13:48:40 crc kubenswrapper[4833]: E0217 13:48:40.046866 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347afd5b-1c04-4364-87ef-82bf98a454a6" containerName="extract-utilities" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.046875 4833 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="347afd5b-1c04-4364-87ef-82bf98a454a6" containerName="extract-utilities" Feb 17 13:48:40 crc kubenswrapper[4833]: E0217 13:48:40.046890 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858" containerName="route-controller-manager" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.046900 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858" containerName="route-controller-manager" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.047083 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="41127460-87b7-47b4-81c2-cebc7667e185" containerName="controller-manager" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.047104 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="347afd5b-1c04-4364-87ef-82bf98a454a6" containerName="registry-server" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.047121 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="73094c1e-9f36-472d-9275-12562b4cd250" containerName="registry-server" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.047140 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858" containerName="route-controller-manager" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.047153 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="96680c84-52a5-4ddc-a676-7a5c71c9f3f6" containerName="registry-server" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.047587 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54c9786465-52n82"] Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.047751 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.048840 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.053908 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.054081 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.054284 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.054619 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.054941 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.055197 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.055792 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.056215 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.056493 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 
13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.056570 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.056737 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.056795 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.065533 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54c9786465-52n82"] Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.068732 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf"] Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.069717 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.130334 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbjcc\" (UniqueName: \"kubernetes.io/projected/9117597c-5f66-4616-a802-762135377474-kube-api-access-cbjcc\") pod \"route-controller-manager-7b59fdff6d-5m6zf\" (UID: \"9117597c-5f66-4616-a802-762135377474\") " pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.130487 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-proxy-ca-bundles\") pod \"controller-manager-54c9786465-52n82\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") 
" pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.130569 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k59df\" (UniqueName: \"kubernetes.io/projected/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-kube-api-access-k59df\") pod \"controller-manager-54c9786465-52n82\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") " pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.130599 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9117597c-5f66-4616-a802-762135377474-config\") pod \"route-controller-manager-7b59fdff6d-5m6zf\" (UID: \"9117597c-5f66-4616-a802-762135377474\") " pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.130667 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-serving-cert\") pod \"controller-manager-54c9786465-52n82\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") " pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.130700 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9117597c-5f66-4616-a802-762135377474-serving-cert\") pod \"route-controller-manager-7b59fdff6d-5m6zf\" (UID: \"9117597c-5f66-4616-a802-762135377474\") " pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.130767 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-config\") pod \"controller-manager-54c9786465-52n82\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") " pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.130823 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9117597c-5f66-4616-a802-762135377474-client-ca\") pod \"route-controller-manager-7b59fdff6d-5m6zf\" (UID: \"9117597c-5f66-4616-a802-762135377474\") " pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.130850 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-client-ca\") pod \"controller-manager-54c9786465-52n82\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") " pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.231668 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k59df\" (UniqueName: \"kubernetes.io/projected/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-kube-api-access-k59df\") pod \"controller-manager-54c9786465-52n82\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") " pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.231738 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9117597c-5f66-4616-a802-762135377474-config\") pod \"route-controller-manager-7b59fdff6d-5m6zf\" (UID: \"9117597c-5f66-4616-a802-762135377474\") " 
pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.231778 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-serving-cert\") pod \"controller-manager-54c9786465-52n82\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") " pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.231805 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9117597c-5f66-4616-a802-762135377474-serving-cert\") pod \"route-controller-manager-7b59fdff6d-5m6zf\" (UID: \"9117597c-5f66-4616-a802-762135377474\") " pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.231843 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-config\") pod \"controller-manager-54c9786465-52n82\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") " pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.231868 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9117597c-5f66-4616-a802-762135377474-client-ca\") pod \"route-controller-manager-7b59fdff6d-5m6zf\" (UID: \"9117597c-5f66-4616-a802-762135377474\") " pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.231891 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-client-ca\") pod \"controller-manager-54c9786465-52n82\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") " pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.231918 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbjcc\" (UniqueName: \"kubernetes.io/projected/9117597c-5f66-4616-a802-762135377474-kube-api-access-cbjcc\") pod \"route-controller-manager-7b59fdff6d-5m6zf\" (UID: \"9117597c-5f66-4616-a802-762135377474\") " pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.231958 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-proxy-ca-bundles\") pod \"controller-manager-54c9786465-52n82\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") " pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.233548 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-proxy-ca-bundles\") pod \"controller-manager-54c9786465-52n82\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") " pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.233602 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-client-ca\") pod \"controller-manager-54c9786465-52n82\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") " pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 
13:48:40.234221 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9117597c-5f66-4616-a802-762135377474-client-ca\") pod \"route-controller-manager-7b59fdff6d-5m6zf\" (UID: \"9117597c-5f66-4616-a802-762135377474\") " pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.234349 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-config\") pod \"controller-manager-54c9786465-52n82\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") " pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.235955 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9117597c-5f66-4616-a802-762135377474-config\") pod \"route-controller-manager-7b59fdff6d-5m6zf\" (UID: \"9117597c-5f66-4616-a802-762135377474\") " pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.237636 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-serving-cert\") pod \"controller-manager-54c9786465-52n82\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") " pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.248333 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9117597c-5f66-4616-a802-762135377474-serving-cert\") pod \"route-controller-manager-7b59fdff6d-5m6zf\" (UID: \"9117597c-5f66-4616-a802-762135377474\") " 
pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.251529 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k59df\" (UniqueName: \"kubernetes.io/projected/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-kube-api-access-k59df\") pod \"controller-manager-54c9786465-52n82\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") " pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.252612 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbjcc\" (UniqueName: \"kubernetes.io/projected/9117597c-5f66-4616-a802-762135377474-kube-api-access-cbjcc\") pod \"route-controller-manager-7b59fdff6d-5m6zf\" (UID: \"9117597c-5f66-4616-a802-762135377474\") " pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.370676 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.387499 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.702833 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54c9786465-52n82"] Feb 17 13:48:40 crc kubenswrapper[4833]: W0217 13:48:40.712100 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1f6924f_3afd_49b8_80b8_d3df9c59ca19.slice/crio-ae4f1e1687ed5b357d60c9231d0dacece5430c8c55344997fc2c59b134f91e6e WatchSource:0}: Error finding container ae4f1e1687ed5b357d60c9231d0dacece5430c8c55344997fc2c59b134f91e6e: Status 404 returned error can't find the container with id ae4f1e1687ed5b357d60c9231d0dacece5430c8c55344997fc2c59b134f91e6e Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.827750 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf"] Feb 17 13:48:40 crc kubenswrapper[4833]: W0217 13:48:40.835434 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9117597c_5f66_4616_a802_762135377474.slice/crio-1dd6e95ff4afb62aff94976a14e96c26ead66b5e1958022adb8af61305934850 WatchSource:0}: Error finding container 1dd6e95ff4afb62aff94976a14e96c26ead66b5e1958022adb8af61305934850: Status 404 returned error can't find the container with id 1dd6e95ff4afb62aff94976a14e96c26ead66b5e1958022adb8af61305934850 Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.964640 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54c9786465-52n82" event={"ID":"c1f6924f-3afd-49b8-80b8-d3df9c59ca19","Type":"ContainerStarted","Data":"e0cb27adefcedaf5df509bde426d540be97a7a6d464b1eb046e2c1b1e0a0308d"} Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.964971 4833 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54c9786465-52n82" event={"ID":"c1f6924f-3afd-49b8-80b8-d3df9c59ca19","Type":"ContainerStarted","Data":"ae4f1e1687ed5b357d60c9231d0dacece5430c8c55344997fc2c59b134f91e6e"} Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.964988 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.965929 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" event={"ID":"9117597c-5f66-4616-a802-762135377474","Type":"ContainerStarted","Data":"1dd6e95ff4afb62aff94976a14e96c26ead66b5e1958022adb8af61305934850"} Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.970943 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:40 crc kubenswrapper[4833]: I0217 13:48:40.984218 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54c9786465-52n82" podStartSLOduration=2.984200517 podStartE2EDuration="2.984200517s" podCreationTimestamp="2026-02-17 13:48:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:48:40.982174644 +0000 UTC m=+210.617274097" watchObservedRunningTime="2026-02-17 13:48:40.984200517 +0000 UTC m=+210.619299950" Feb 17 13:48:41 crc kubenswrapper[4833]: I0217 13:48:41.048396 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41127460-87b7-47b4-81c2-cebc7667e185" path="/var/lib/kubelet/pods/41127460-87b7-47b4-81c2-cebc7667e185/volumes" Feb 17 13:48:41 crc kubenswrapper[4833]: I0217 13:48:41.049002 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858" path="/var/lib/kubelet/pods/60a2c4e5-b6e2-4c5c-b6fd-772e6ce28858/volumes"
Feb 17 13:48:41 crc kubenswrapper[4833]: I0217 13:48:41.975455 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" event={"ID":"9117597c-5f66-4616-a802-762135377474","Type":"ContainerStarted","Data":"3cbf6e0cff5bda1df20b3621fc8ceb215bbe0f8e6c71afd15ec1a399fd421daa"}
Feb 17 13:48:42 crc kubenswrapper[4833]: I0217 13:48:42.980384 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf"
Feb 17 13:48:42 crc kubenswrapper[4833]: I0217 13:48:42.988020 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf"
Feb 17 13:48:43 crc kubenswrapper[4833]: I0217 13:48:43.007462 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" podStartSLOduration=5.007444256 podStartE2EDuration="5.007444256s" podCreationTimestamp="2026-02-17 13:48:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:48:41.996848849 +0000 UTC m=+211.631948282" watchObservedRunningTime="2026-02-17 13:48:43.007444256 +0000 UTC m=+212.642543699"
Feb 17 13:48:44 crc kubenswrapper[4833]: I0217 13:48:44.243664 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 13:48:44 crc kubenswrapper[4833]: I0217 13:48:44.243748 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 13:48:44 crc kubenswrapper[4833]: I0217 13:48:44.243820 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl"
Feb 17 13:48:44 crc kubenswrapper[4833]: I0217 13:48:44.244677 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78"} pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 13:48:44 crc kubenswrapper[4833]: I0217 13:48:44.244779 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" containerID="cri-o://89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78" gracePeriod=600
Feb 17 13:48:44 crc kubenswrapper[4833]: I0217 13:48:44.994566 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerID="89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78" exitCode=0
Feb 17 13:48:44 crc kubenswrapper[4833]: I0217 13:48:44.994673 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" event={"ID":"f4a1ca83-1919-4f9c-82de-c849cbd50e70","Type":"ContainerDied","Data":"89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78"}
Feb 17 13:48:44 crc kubenswrapper[4833]: I0217 13:48:44.995141 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" event={"ID":"f4a1ca83-1919-4f9c-82de-c849cbd50e70","Type":"ContainerStarted","Data":"62b5aa8ef2aa9dc1ba827be3b12bbc4a526f52768d531f209957f02ab5874e23"}
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.274779 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" podUID="234db162-a495-45d1-8af9-7e2deaa2763c" containerName="oauth-openshift" containerID="cri-o://0ef06d9f485d047103a23cd9fca597e8910ce306f9d7e448f56373604ca3023c" gracePeriod=15
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.802206 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb"
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.923731 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-service-ca\") pod \"234db162-a495-45d1-8af9-7e2deaa2763c\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") "
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.923790 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-router-certs\") pod \"234db162-a495-45d1-8af9-7e2deaa2763c\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") "
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.923847 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-template-error\") pod \"234db162-a495-45d1-8af9-7e2deaa2763c\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") "
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.923875 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-cliconfig\") pod \"234db162-a495-45d1-8af9-7e2deaa2763c\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") "
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.923903 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-trusted-ca-bundle\") pod \"234db162-a495-45d1-8af9-7e2deaa2763c\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") "
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.923929 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-ocp-branding-template\") pod \"234db162-a495-45d1-8af9-7e2deaa2763c\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") "
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.923960 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/234db162-a495-45d1-8af9-7e2deaa2763c-audit-dir\") pod \"234db162-a495-45d1-8af9-7e2deaa2763c\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") "
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.923989 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-template-login\") pod \"234db162-a495-45d1-8af9-7e2deaa2763c\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") "
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.924023 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-serving-cert\") pod \"234db162-a495-45d1-8af9-7e2deaa2763c\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") "
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.924128 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-template-provider-selection\") pod \"234db162-a495-45d1-8af9-7e2deaa2763c\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") "
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.924169 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-audit-policies\") pod \"234db162-a495-45d1-8af9-7e2deaa2763c\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") "
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.924199 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvprr\" (UniqueName: \"kubernetes.io/projected/234db162-a495-45d1-8af9-7e2deaa2763c-kube-api-access-pvprr\") pod \"234db162-a495-45d1-8af9-7e2deaa2763c\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") "
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.924239 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-session\") pod \"234db162-a495-45d1-8af9-7e2deaa2763c\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") "
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.924282 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-idp-0-file-data\") pod \"234db162-a495-45d1-8af9-7e2deaa2763c\" (UID: \"234db162-a495-45d1-8af9-7e2deaa2763c\") "
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.924953 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "234db162-a495-45d1-8af9-7e2deaa2763c" (UID: "234db162-a495-45d1-8af9-7e2deaa2763c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.925021 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/234db162-a495-45d1-8af9-7e2deaa2763c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "234db162-a495-45d1-8af9-7e2deaa2763c" (UID: "234db162-a495-45d1-8af9-7e2deaa2763c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.925811 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "234db162-a495-45d1-8af9-7e2deaa2763c" (UID: "234db162-a495-45d1-8af9-7e2deaa2763c"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.926608 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "234db162-a495-45d1-8af9-7e2deaa2763c" (UID: "234db162-a495-45d1-8af9-7e2deaa2763c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.927506 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "234db162-a495-45d1-8af9-7e2deaa2763c" (UID: "234db162-a495-45d1-8af9-7e2deaa2763c"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.930666 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "234db162-a495-45d1-8af9-7e2deaa2763c" (UID: "234db162-a495-45d1-8af9-7e2deaa2763c"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.931619 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "234db162-a495-45d1-8af9-7e2deaa2763c" (UID: "234db162-a495-45d1-8af9-7e2deaa2763c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.934164 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "234db162-a495-45d1-8af9-7e2deaa2763c" (UID: "234db162-a495-45d1-8af9-7e2deaa2763c"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.934423 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/234db162-a495-45d1-8af9-7e2deaa2763c-kube-api-access-pvprr" (OuterVolumeSpecName: "kube-api-access-pvprr") pod "234db162-a495-45d1-8af9-7e2deaa2763c" (UID: "234db162-a495-45d1-8af9-7e2deaa2763c"). InnerVolumeSpecName "kube-api-access-pvprr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.937925 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "234db162-a495-45d1-8af9-7e2deaa2763c" (UID: "234db162-a495-45d1-8af9-7e2deaa2763c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.939661 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "234db162-a495-45d1-8af9-7e2deaa2763c" (UID: "234db162-a495-45d1-8af9-7e2deaa2763c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.939888 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "234db162-a495-45d1-8af9-7e2deaa2763c" (UID: "234db162-a495-45d1-8af9-7e2deaa2763c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.940053 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "234db162-a495-45d1-8af9-7e2deaa2763c" (UID: "234db162-a495-45d1-8af9-7e2deaa2763c"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:48:46 crc kubenswrapper[4833]: I0217 13:48:46.940394 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "234db162-a495-45d1-8af9-7e2deaa2763c" (UID: "234db162-a495-45d1-8af9-7e2deaa2763c"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.017341 4833 generic.go:334] "Generic (PLEG): container finished" podID="234db162-a495-45d1-8af9-7e2deaa2763c" containerID="0ef06d9f485d047103a23cd9fca597e8910ce306f9d7e448f56373604ca3023c" exitCode=0
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.017433 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" event={"ID":"234db162-a495-45d1-8af9-7e2deaa2763c","Type":"ContainerDied","Data":"0ef06d9f485d047103a23cd9fca597e8910ce306f9d7e448f56373604ca3023c"}
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.017461 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb" event={"ID":"234db162-a495-45d1-8af9-7e2deaa2763c","Type":"ContainerDied","Data":"f03d83af7de5eebd63f90d26ee64329f9b501f84488fcf8f1ff61a153f26ac30"}
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.017477 4833 scope.go:117] "RemoveContainer" containerID="0ef06d9f485d047103a23cd9fca597e8910ce306f9d7e448f56373604ca3023c"
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.017594 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h2rpb"
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.025127 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.025149 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.025160 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.025170 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.025179 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.025188 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.025198 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.025207 4833 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/234db162-a495-45d1-8af9-7e2deaa2763c-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.025216 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.025226 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.025236 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.025245 4833 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/234db162-a495-45d1-8af9-7e2deaa2763c-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.025253 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvprr\" (UniqueName: \"kubernetes.io/projected/234db162-a495-45d1-8af9-7e2deaa2763c-kube-api-access-pvprr\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.025262 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/234db162-a495-45d1-8af9-7e2deaa2763c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.050974 4833 scope.go:117] "RemoveContainer" containerID="0ef06d9f485d047103a23cd9fca597e8910ce306f9d7e448f56373604ca3023c"
Feb 17 13:48:47 crc kubenswrapper[4833]: E0217 13:48:47.052674 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ef06d9f485d047103a23cd9fca597e8910ce306f9d7e448f56373604ca3023c\": container with ID starting with 0ef06d9f485d047103a23cd9fca597e8910ce306f9d7e448f56373604ca3023c not found: ID does not exist" containerID="0ef06d9f485d047103a23cd9fca597e8910ce306f9d7e448f56373604ca3023c"
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.052742 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ef06d9f485d047103a23cd9fca597e8910ce306f9d7e448f56373604ca3023c"} err="failed to get container status \"0ef06d9f485d047103a23cd9fca597e8910ce306f9d7e448f56373604ca3023c\": rpc error: code = NotFound desc = could not find container \"0ef06d9f485d047103a23cd9fca597e8910ce306f9d7e448f56373604ca3023c\": container with ID starting with 0ef06d9f485d047103a23cd9fca597e8910ce306f9d7e448f56373604ca3023c not found: ID does not exist"
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.059708 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h2rpb"]
Feb 17 13:48:47 crc kubenswrapper[4833]: I0217 13:48:47.061189 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h2rpb"]
Feb 17 13:48:49 crc kubenswrapper[4833]: I0217 13:48:49.051721 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="234db162-a495-45d1-8af9-7e2deaa2763c" path="/var/lib/kubelet/pods/234db162-a495-45d1-8af9-7e2deaa2763c/volumes"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.080717 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"]
Feb 17 13:48:50 crc kubenswrapper[4833]: E0217 13:48:50.080950 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="234db162-a495-45d1-8af9-7e2deaa2763c" containerName="oauth-openshift"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.080968 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="234db162-a495-45d1-8af9-7e2deaa2763c" containerName="oauth-openshift"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.081107 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="234db162-a495-45d1-8af9-7e2deaa2763c" containerName="oauth-openshift"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.081525 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.085870 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.086006 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.086490 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.086587 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.086812 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.086838 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.086854 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.086905 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.087754 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.087931 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.088052 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.088062 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.096633 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.106656 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.115750 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"]
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.116655 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.166228 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwwqb\" (UniqueName: \"kubernetes.io/projected/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-kube-api-access-rwwqb\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.166272 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-router-certs\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.166302 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.166335 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-user-template-login\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.166358 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.166404 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.166446 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-session\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.166500 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-audit-dir\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.166523 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.166697 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-user-template-error\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.166753 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-audit-policies\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.166798 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.166829 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.166858 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-service-ca\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.267845 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-session\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.267932 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.267964 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-audit-dir\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.268059 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-user-template-error\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.268099 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-audit-policies\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.268132 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.268164 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.268184 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-audit-dir\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.268197 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-service-ca\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.268456 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwwqb\" (UniqueName: \"kubernetes.io/projected/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-kube-api-access-rwwqb\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.268519 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-router-certs\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.268578 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.268665 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-user-template-login\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.268705 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"
Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.268748 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.269213 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.270310 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-service-ca\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.270516 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-audit-policies\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.271540 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" Feb 17 13:48:50 crc 
kubenswrapper[4833]: I0217 13:48:50.276541 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-user-template-error\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.276860 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-session\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.276911 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-router-certs\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.279061 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.279461 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.284114 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.288586 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-user-template-login\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.295908 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwwqb\" (UniqueName: \"kubernetes.io/projected/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-kube-api-access-rwwqb\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.299028 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e7d9ae9a-3b5b-448a-be81-917a2830f9f5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77c8c5f65c-lvbrt\" (UID: \"e7d9ae9a-3b5b-448a-be81-917a2830f9f5\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" Feb 17 
13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.412355 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" Feb 17 13:48:50 crc kubenswrapper[4833]: I0217 13:48:50.903876 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt"] Feb 17 13:48:51 crc kubenswrapper[4833]: I0217 13:48:51.048232 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" event={"ID":"e7d9ae9a-3b5b-448a-be81-917a2830f9f5","Type":"ContainerStarted","Data":"dabc4c5d2d27de9f600a50570776393d079e6206d20e97122524cd7078a3feb4"} Feb 17 13:48:52 crc kubenswrapper[4833]: I0217 13:48:52.048818 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" event={"ID":"e7d9ae9a-3b5b-448a-be81-917a2830f9f5","Type":"ContainerStarted","Data":"fdab4c4bf80fe135b30d43c423753468eb3fb46ae24a9ed3a0f60c711ad8f92f"} Feb 17 13:48:52 crc kubenswrapper[4833]: I0217 13:48:52.049534 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" Feb 17 13:48:52 crc kubenswrapper[4833]: I0217 13:48:52.058943 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" Feb 17 13:48:52 crc kubenswrapper[4833]: I0217 13:48:52.079868 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-77c8c5f65c-lvbrt" podStartSLOduration=31.079849735 podStartE2EDuration="31.079849735s" podCreationTimestamp="2026-02-17 13:48:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:48:52.077992777 +0000 UTC m=+221.713092230" watchObservedRunningTime="2026-02-17 
13:48:52.079849735 +0000 UTC m=+221.714949168" Feb 17 13:48:58 crc kubenswrapper[4833]: I0217 13:48:58.690963 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54c9786465-52n82"] Feb 17 13:48:58 crc kubenswrapper[4833]: I0217 13:48:58.691763 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-54c9786465-52n82" podUID="c1f6924f-3afd-49b8-80b8-d3df9c59ca19" containerName="controller-manager" containerID="cri-o://e0cb27adefcedaf5df509bde426d540be97a7a6d464b1eb046e2c1b1e0a0308d" gracePeriod=30 Feb 17 13:48:58 crc kubenswrapper[4833]: I0217 13:48:58.785516 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf"] Feb 17 13:48:58 crc kubenswrapper[4833]: I0217 13:48:58.786021 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" podUID="9117597c-5f66-4616-a802-762135377474" containerName="route-controller-manager" containerID="cri-o://3cbf6e0cff5bda1df20b3621fc8ceb215bbe0f8e6c71afd15ec1a399fd421daa" gracePeriod=30 Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.089930 4833 generic.go:334] "Generic (PLEG): container finished" podID="9117597c-5f66-4616-a802-762135377474" containerID="3cbf6e0cff5bda1df20b3621fc8ceb215bbe0f8e6c71afd15ec1a399fd421daa" exitCode=0 Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.090026 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" event={"ID":"9117597c-5f66-4616-a802-762135377474","Type":"ContainerDied","Data":"3cbf6e0cff5bda1df20b3621fc8ceb215bbe0f8e6c71afd15ec1a399fd421daa"} Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.091849 4833 generic.go:334] "Generic (PLEG): container finished" 
podID="c1f6924f-3afd-49b8-80b8-d3df9c59ca19" containerID="e0cb27adefcedaf5df509bde426d540be97a7a6d464b1eb046e2c1b1e0a0308d" exitCode=0 Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.091880 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54c9786465-52n82" event={"ID":"c1f6924f-3afd-49b8-80b8-d3df9c59ca19","Type":"ContainerDied","Data":"e0cb27adefcedaf5df509bde426d540be97a7a6d464b1eb046e2c1b1e0a0308d"} Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.254162 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.270780 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.417367 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-proxy-ca-bundles\") pod \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") " Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.417428 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbjcc\" (UniqueName: \"kubernetes.io/projected/9117597c-5f66-4616-a802-762135377474-kube-api-access-cbjcc\") pod \"9117597c-5f66-4616-a802-762135377474\" (UID: \"9117597c-5f66-4616-a802-762135377474\") " Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.417467 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-client-ca\") pod \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") " Feb 17 
13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.417490 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9117597c-5f66-4616-a802-762135377474-client-ca\") pod \"9117597c-5f66-4616-a802-762135377474\" (UID: \"9117597c-5f66-4616-a802-762135377474\") " Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.417508 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-serving-cert\") pod \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") " Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.417542 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k59df\" (UniqueName: \"kubernetes.io/projected/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-kube-api-access-k59df\") pod \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") " Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.417568 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9117597c-5f66-4616-a802-762135377474-serving-cert\") pod \"9117597c-5f66-4616-a802-762135377474\" (UID: \"9117597c-5f66-4616-a802-762135377474\") " Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.417587 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-config\") pod \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\" (UID: \"c1f6924f-3afd-49b8-80b8-d3df9c59ca19\") " Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.417632 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9117597c-5f66-4616-a802-762135377474-config\") pod 
\"9117597c-5f66-4616-a802-762135377474\" (UID: \"9117597c-5f66-4616-a802-762135377474\") " Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.418401 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9117597c-5f66-4616-a802-762135377474-client-ca" (OuterVolumeSpecName: "client-ca") pod "9117597c-5f66-4616-a802-762135377474" (UID: "9117597c-5f66-4616-a802-762135377474"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.418616 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-client-ca" (OuterVolumeSpecName: "client-ca") pod "c1f6924f-3afd-49b8-80b8-d3df9c59ca19" (UID: "c1f6924f-3afd-49b8-80b8-d3df9c59ca19"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.418804 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-config" (OuterVolumeSpecName: "config") pod "c1f6924f-3afd-49b8-80b8-d3df9c59ca19" (UID: "c1f6924f-3afd-49b8-80b8-d3df9c59ca19"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.418948 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9117597c-5f66-4616-a802-762135377474-config" (OuterVolumeSpecName: "config") pod "9117597c-5f66-4616-a802-762135377474" (UID: "9117597c-5f66-4616-a802-762135377474"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.419300 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9117597c-5f66-4616-a802-762135377474-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.419329 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.419349 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9117597c-5f66-4616-a802-762135377474-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.419369 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.419551 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c1f6924f-3afd-49b8-80b8-d3df9c59ca19" (UID: "c1f6924f-3afd-49b8-80b8-d3df9c59ca19"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.422900 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-kube-api-access-k59df" (OuterVolumeSpecName: "kube-api-access-k59df") pod "c1f6924f-3afd-49b8-80b8-d3df9c59ca19" (UID: "c1f6924f-3afd-49b8-80b8-d3df9c59ca19"). InnerVolumeSpecName "kube-api-access-k59df". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.423127 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c1f6924f-3afd-49b8-80b8-d3df9c59ca19" (UID: "c1f6924f-3afd-49b8-80b8-d3df9c59ca19"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.423270 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9117597c-5f66-4616-a802-762135377474-kube-api-access-cbjcc" (OuterVolumeSpecName: "kube-api-access-cbjcc") pod "9117597c-5f66-4616-a802-762135377474" (UID: "9117597c-5f66-4616-a802-762135377474"). InnerVolumeSpecName "kube-api-access-cbjcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.424184 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9117597c-5f66-4616-a802-762135377474-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9117597c-5f66-4616-a802-762135377474" (UID: "9117597c-5f66-4616-a802-762135377474"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.520568 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k59df\" (UniqueName: \"kubernetes.io/projected/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-kube-api-access-k59df\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.520626 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9117597c-5f66-4616-a802-762135377474-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.520647 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.520664 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbjcc\" (UniqueName: \"kubernetes.io/projected/9117597c-5f66-4616-a802-762135377474-kube-api-access-cbjcc\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:59 crc kubenswrapper[4833]: I0217 13:48:59.520681 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1f6924f-3afd-49b8-80b8-d3df9c59ca19-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.092682 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56dc9575cd-587k9"] Feb 17 13:49:00 crc kubenswrapper[4833]: E0217 13:49:00.093123 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f6924f-3afd-49b8-80b8-d3df9c59ca19" containerName="controller-manager" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.093154 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f6924f-3afd-49b8-80b8-d3df9c59ca19" 
containerName="controller-manager" Feb 17 13:49:00 crc kubenswrapper[4833]: E0217 13:49:00.093201 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9117597c-5f66-4616-a802-762135377474" containerName="route-controller-manager" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.093221 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9117597c-5f66-4616-a802-762135377474" containerName="route-controller-manager" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.093475 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1f6924f-3afd-49b8-80b8-d3df9c59ca19" containerName="controller-manager" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.093505 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9117597c-5f66-4616-a802-762135377474" containerName="route-controller-manager" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.094237 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.100357 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt"] Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.101807 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.104512 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" event={"ID":"9117597c-5f66-4616-a802-762135377474","Type":"ContainerDied","Data":"1dd6e95ff4afb62aff94976a14e96c26ead66b5e1958022adb8af61305934850"} Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.104596 4833 scope.go:117] "RemoveContainer" containerID="3cbf6e0cff5bda1df20b3621fc8ceb215bbe0f8e6c71afd15ec1a399fd421daa" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.104540 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.107695 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54c9786465-52n82" event={"ID":"c1f6924f-3afd-49b8-80b8-d3df9c59ca19","Type":"ContainerDied","Data":"ae4f1e1687ed5b357d60c9231d0dacece5430c8c55344997fc2c59b134f91e6e"} Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.107774 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54c9786465-52n82" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.135805 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt"] Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.141354 4833 scope.go:117] "RemoveContainer" containerID="e0cb27adefcedaf5df509bde426d540be97a7a6d464b1eb046e2c1b1e0a0308d" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.178290 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56dc9575cd-587k9"] Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.186962 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf"] Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.191530 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b59fdff6d-5m6zf"] Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.195407 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54c9786465-52n82"] Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.198569 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-54c9786465-52n82"] Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.229850 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2456e646-8fbd-4eb4-903a-30ad35d77190-config\") pod \"controller-manager-56dc9575cd-587k9\" (UID: \"2456e646-8fbd-4eb4-903a-30ad35d77190\") " pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.229933 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2456e646-8fbd-4eb4-903a-30ad35d77190-serving-cert\") pod \"controller-manager-56dc9575cd-587k9\" (UID: \"2456e646-8fbd-4eb4-903a-30ad35d77190\") " pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.230116 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93f4d6dd-4f21-458d-8e36-af26772778e9-client-ca\") pod \"route-controller-manager-6ccff8795d-2nlbt\" (UID: \"93f4d6dd-4f21-458d-8e36-af26772778e9\") " pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.230200 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f4d6dd-4f21-458d-8e36-af26772778e9-config\") pod \"route-controller-manager-6ccff8795d-2nlbt\" (UID: \"93f4d6dd-4f21-458d-8e36-af26772778e9\") " pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.230254 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2456e646-8fbd-4eb4-903a-30ad35d77190-client-ca\") pod \"controller-manager-56dc9575cd-587k9\" (UID: \"2456e646-8fbd-4eb4-903a-30ad35d77190\") " pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.230314 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ks9g\" (UniqueName: \"kubernetes.io/projected/93f4d6dd-4f21-458d-8e36-af26772778e9-kube-api-access-8ks9g\") pod 
\"route-controller-manager-6ccff8795d-2nlbt\" (UID: \"93f4d6dd-4f21-458d-8e36-af26772778e9\") " pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.230460 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2456e646-8fbd-4eb4-903a-30ad35d77190-proxy-ca-bundles\") pod \"controller-manager-56dc9575cd-587k9\" (UID: \"2456e646-8fbd-4eb4-903a-30ad35d77190\") " pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.230531 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f4d6dd-4f21-458d-8e36-af26772778e9-serving-cert\") pod \"route-controller-manager-6ccff8795d-2nlbt\" (UID: \"93f4d6dd-4f21-458d-8e36-af26772778e9\") " pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.230656 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rqpt\" (UniqueName: \"kubernetes.io/projected/2456e646-8fbd-4eb4-903a-30ad35d77190-kube-api-access-2rqpt\") pod \"controller-manager-56dc9575cd-587k9\" (UID: \"2456e646-8fbd-4eb4-903a-30ad35d77190\") " pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.332620 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2456e646-8fbd-4eb4-903a-30ad35d77190-proxy-ca-bundles\") pod \"controller-manager-56dc9575cd-587k9\" (UID: \"2456e646-8fbd-4eb4-903a-30ad35d77190\") " pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" Feb 17 13:49:00 crc kubenswrapper[4833]: 
I0217 13:49:00.332808 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f4d6dd-4f21-458d-8e36-af26772778e9-serving-cert\") pod \"route-controller-manager-6ccff8795d-2nlbt\" (UID: \"93f4d6dd-4f21-458d-8e36-af26772778e9\") " pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.332905 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rqpt\" (UniqueName: \"kubernetes.io/projected/2456e646-8fbd-4eb4-903a-30ad35d77190-kube-api-access-2rqpt\") pod \"controller-manager-56dc9575cd-587k9\" (UID: \"2456e646-8fbd-4eb4-903a-30ad35d77190\") " pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.333023 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2456e646-8fbd-4eb4-903a-30ad35d77190-config\") pod \"controller-manager-56dc9575cd-587k9\" (UID: \"2456e646-8fbd-4eb4-903a-30ad35d77190\") " pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.333125 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2456e646-8fbd-4eb4-903a-30ad35d77190-serving-cert\") pod \"controller-manager-56dc9575cd-587k9\" (UID: \"2456e646-8fbd-4eb4-903a-30ad35d77190\") " pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.333192 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93f4d6dd-4f21-458d-8e36-af26772778e9-client-ca\") pod \"route-controller-manager-6ccff8795d-2nlbt\" (UID: \"93f4d6dd-4f21-458d-8e36-af26772778e9\") " 
pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.333247 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f4d6dd-4f21-458d-8e36-af26772778e9-config\") pod \"route-controller-manager-6ccff8795d-2nlbt\" (UID: \"93f4d6dd-4f21-458d-8e36-af26772778e9\") " pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.333300 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2456e646-8fbd-4eb4-903a-30ad35d77190-client-ca\") pod \"controller-manager-56dc9575cd-587k9\" (UID: \"2456e646-8fbd-4eb4-903a-30ad35d77190\") " pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.333365 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ks9g\" (UniqueName: \"kubernetes.io/projected/93f4d6dd-4f21-458d-8e36-af26772778e9-kube-api-access-8ks9g\") pod \"route-controller-manager-6ccff8795d-2nlbt\" (UID: \"93f4d6dd-4f21-458d-8e36-af26772778e9\") " pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.334904 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2456e646-8fbd-4eb4-903a-30ad35d77190-proxy-ca-bundles\") pod \"controller-manager-56dc9575cd-587k9\" (UID: \"2456e646-8fbd-4eb4-903a-30ad35d77190\") " pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.335710 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/93f4d6dd-4f21-458d-8e36-af26772778e9-client-ca\") pod \"route-controller-manager-6ccff8795d-2nlbt\" (UID: \"93f4d6dd-4f21-458d-8e36-af26772778e9\") " pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.336203 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2456e646-8fbd-4eb4-903a-30ad35d77190-client-ca\") pod \"controller-manager-56dc9575cd-587k9\" (UID: \"2456e646-8fbd-4eb4-903a-30ad35d77190\") " pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.336858 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f4d6dd-4f21-458d-8e36-af26772778e9-config\") pod \"route-controller-manager-6ccff8795d-2nlbt\" (UID: \"93f4d6dd-4f21-458d-8e36-af26772778e9\") " pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.336955 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2456e646-8fbd-4eb4-903a-30ad35d77190-config\") pod \"controller-manager-56dc9575cd-587k9\" (UID: \"2456e646-8fbd-4eb4-903a-30ad35d77190\") " pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.342013 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f4d6dd-4f21-458d-8e36-af26772778e9-serving-cert\") pod \"route-controller-manager-6ccff8795d-2nlbt\" (UID: \"93f4d6dd-4f21-458d-8e36-af26772778e9\") " pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.342983 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2456e646-8fbd-4eb4-903a-30ad35d77190-serving-cert\") pod \"controller-manager-56dc9575cd-587k9\" (UID: \"2456e646-8fbd-4eb4-903a-30ad35d77190\") " pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.357591 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rqpt\" (UniqueName: \"kubernetes.io/projected/2456e646-8fbd-4eb4-903a-30ad35d77190-kube-api-access-2rqpt\") pod \"controller-manager-56dc9575cd-587k9\" (UID: \"2456e646-8fbd-4eb4-903a-30ad35d77190\") " pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.364144 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ks9g\" (UniqueName: \"kubernetes.io/projected/93f4d6dd-4f21-458d-8e36-af26772778e9-kube-api-access-8ks9g\") pod \"route-controller-manager-6ccff8795d-2nlbt\" (UID: \"93f4d6dd-4f21-458d-8e36-af26772778e9\") " pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.453163 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.471641 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.686717 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56dc9575cd-587k9"] Feb 17 13:49:00 crc kubenswrapper[4833]: I0217 13:49:00.939340 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt"] Feb 17 13:49:01 crc kubenswrapper[4833]: I0217 13:49:01.049974 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9117597c-5f66-4616-a802-762135377474" path="/var/lib/kubelet/pods/9117597c-5f66-4616-a802-762135377474/volumes" Feb 17 13:49:01 crc kubenswrapper[4833]: I0217 13:49:01.050667 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1f6924f-3afd-49b8-80b8-d3df9c59ca19" path="/var/lib/kubelet/pods/c1f6924f-3afd-49b8-80b8-d3df9c59ca19/volumes" Feb 17 13:49:01 crc kubenswrapper[4833]: I0217 13:49:01.113687 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" event={"ID":"2456e646-8fbd-4eb4-903a-30ad35d77190","Type":"ContainerStarted","Data":"30e81c9deb10f75916a31b3ab2336b40dfa7531b5cc04ec2a1f391bda88b68d5"} Feb 17 13:49:01 crc kubenswrapper[4833]: I0217 13:49:01.113741 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" event={"ID":"2456e646-8fbd-4eb4-903a-30ad35d77190","Type":"ContainerStarted","Data":"8d75e219d064f54c5d0f8d273ca2c2eb726da1d2cc63d2da84731adf1448d89d"} Feb 17 13:49:01 crc kubenswrapper[4833]: I0217 13:49:01.114105 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" Feb 17 13:49:01 crc kubenswrapper[4833]: I0217 13:49:01.119728 4833 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" event={"ID":"93f4d6dd-4f21-458d-8e36-af26772778e9","Type":"ContainerStarted","Data":"d32945bc08a94659fc2151b378a9d9911f147b35cb0c4fea6532ed65776dcd3c"} Feb 17 13:49:01 crc kubenswrapper[4833]: I0217 13:49:01.119764 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" event={"ID":"93f4d6dd-4f21-458d-8e36-af26772778e9","Type":"ContainerStarted","Data":"83afde5305738c9eeadb76082f565e0db734eeddaf8005ee603ef2ad592d60e3"} Feb 17 13:49:01 crc kubenswrapper[4833]: I0217 13:49:01.120583 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" Feb 17 13:49:01 crc kubenswrapper[4833]: I0217 13:49:01.121864 4833 patch_prober.go:28] interesting pod/route-controller-manager-6ccff8795d-2nlbt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Feb 17 13:49:01 crc kubenswrapper[4833]: I0217 13:49:01.121923 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" podUID="93f4d6dd-4f21-458d-8e36-af26772778e9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Feb 17 13:49:01 crc kubenswrapper[4833]: I0217 13:49:01.124253 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" Feb 17 13:49:01 crc kubenswrapper[4833]: I0217 13:49:01.138399 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-56dc9575cd-587k9" podStartSLOduration=3.138384887 podStartE2EDuration="3.138384887s" podCreationTimestamp="2026-02-17 13:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:49:01.135883378 +0000 UTC m=+230.770982821" watchObservedRunningTime="2026-02-17 13:49:01.138384887 +0000 UTC m=+230.773484320" Feb 17 13:49:01 crc kubenswrapper[4833]: I0217 13:49:01.185688 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" podStartSLOduration=3.18567284 podStartE2EDuration="3.18567284s" podCreationTimestamp="2026-02-17 13:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:49:01.184001478 +0000 UTC m=+230.819100911" watchObservedRunningTime="2026-02-17 13:49:01.18567284 +0000 UTC m=+230.820772273" Feb 17 13:49:02 crc kubenswrapper[4833]: I0217 13:49:02.129432 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6ccff8795d-2nlbt" Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.969115 4833 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 13:49:08 crc kubenswrapper[4833]: E0217 13:49:08.970985 4833 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.971214 4833 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 13:49:08 crc 
kubenswrapper[4833]: I0217 13:49:08.971437 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256" gracePeriod=15 Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.971817 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.971996 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51" gracePeriod=15 Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.972124 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f" gracePeriod=15 Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.972179 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553" gracePeriod=15 Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.972256 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" 
containerID="cri-o://090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2" gracePeriod=15 Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.973960 4833 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 13:49:08 crc kubenswrapper[4833]: E0217 13:49:08.979110 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.979173 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 13:49:08 crc kubenswrapper[4833]: E0217 13:49:08.979194 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.979205 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 13:49:08 crc kubenswrapper[4833]: E0217 13:49:08.979242 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.979251 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 13:49:08 crc kubenswrapper[4833]: E0217 13:49:08.979265 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.979274 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 13:49:08 crc kubenswrapper[4833]: E0217 13:49:08.979312 4833 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.979321 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 13:49:08 crc kubenswrapper[4833]: E0217 13:49:08.979333 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.979342 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 13:49:08 crc kubenswrapper[4833]: E0217 13:49:08.979356 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.979364 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.979527 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.979566 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.979581 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.979594 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" 
Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.979607 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 13:49:08 crc kubenswrapper[4833]: I0217 13:49:08.979638 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 13:49:09 crc kubenswrapper[4833]: E0217 13:49:09.011779 4833 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.243:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.068362 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.068418 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.068455 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.068628 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.068702 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.068768 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.068813 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.068848 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.164094 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.165790 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.166897 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51" exitCode=0 Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.166927 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553" exitCode=0 Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.166940 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f" exitCode=0 Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.166953 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2" exitCode=2 Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.167032 4833 scope.go:117] "RemoveContainer" containerID="babf66dd8ad93003e9766e885c7626d4f6dab5bc46a6d714449a01065d7afea9" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.170136 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.170180 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.170207 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.170224 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.170228 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.170320 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.170391 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.170397 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.170464 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.170472 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.170526 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.170568 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.170606 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.170422 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.170650 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.170708 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.312245 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:49:09 crc kubenswrapper[4833]: W0217 13:49:09.332197 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-25bf7b6e0052a4c7874f15b230c34f7c46688dd4c13e081cd2c98faafdf6922a WatchSource:0}: Error finding container 25bf7b6e0052a4c7874f15b230c34f7c46688dd4c13e081cd2c98faafdf6922a: Status 404 returned error can't find the container with id 25bf7b6e0052a4c7874f15b230c34f7c46688dd4c13e081cd2c98faafdf6922a Feb 17 13:49:09 crc kubenswrapper[4833]: E0217 13:49:09.336175 4833 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.243:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18950cd7c6f48b64 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 13:49:09.335575396 +0000 UTC m=+238.970674839,LastTimestamp:2026-02-17 13:49:09.335575396 +0000 UTC m=+238.970674839,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 13:49:09 crc kubenswrapper[4833]: E0217 13:49:09.883506 4833 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:09 crc kubenswrapper[4833]: E0217 13:49:09.884424 4833 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:09 crc kubenswrapper[4833]: E0217 13:49:09.885018 4833 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:09 crc kubenswrapper[4833]: E0217 13:49:09.885401 4833 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:09 crc kubenswrapper[4833]: E0217 13:49:09.885896 4833 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:09 crc kubenswrapper[4833]: I0217 13:49:09.885936 4833 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 17 13:49:09 crc kubenswrapper[4833]: E0217 13:49:09.886370 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="200ms" Feb 17 13:49:10 crc kubenswrapper[4833]: E0217 13:49:10.087968 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="400ms" Feb 17 13:49:10 crc kubenswrapper[4833]: I0217 13:49:10.174838 4833 generic.go:334] "Generic (PLEG): container finished" podID="42b66026-007c-4217-a549-f108d5c880e5" containerID="ca0ad02c4261aec9f4c943877bbed2fc17b9679b990202c6de26446668167c20" exitCode=0 Feb 17 13:49:10 crc kubenswrapper[4833]: I0217 13:49:10.174915 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"42b66026-007c-4217-a549-f108d5c880e5","Type":"ContainerDied","Data":"ca0ad02c4261aec9f4c943877bbed2fc17b9679b990202c6de26446668167c20"} Feb 17 13:49:10 crc kubenswrapper[4833]: I0217 13:49:10.176299 4833 status_manager.go:851] "Failed to get status for pod" podUID="42b66026-007c-4217-a549-f108d5c880e5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:10 crc kubenswrapper[4833]: I0217 13:49:10.178533 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 13:49:10 crc kubenswrapper[4833]: I0217 13:49:10.181292 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e5638dcaff688dc06379447e219950f594937d7643ce6947869ba3d1c9c8b3ac"} Feb 17 13:49:10 crc kubenswrapper[4833]: I0217 13:49:10.181351 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"25bf7b6e0052a4c7874f15b230c34f7c46688dd4c13e081cd2c98faafdf6922a"} Feb 17 13:49:10 crc kubenswrapper[4833]: E0217 13:49:10.182068 4833 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.243:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:49:10 crc kubenswrapper[4833]: I0217 13:49:10.182219 4833 status_manager.go:851] "Failed to get status for pod" podUID="42b66026-007c-4217-a549-f108d5c880e5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:10 crc kubenswrapper[4833]: E0217 13:49:10.489073 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="800ms" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.046472 4833 status_manager.go:851] "Failed to get status for pod" podUID="42b66026-007c-4217-a549-f108d5c880e5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:11 crc kubenswrapper[4833]: E0217 13:49:11.230409 4833 
event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.243:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18950cd7c6f48b64 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 13:49:09.335575396 +0000 UTC m=+238.970674839,LastTimestamp:2026-02-17 13:49:09.335575396 +0000 UTC m=+238.970674839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 13:49:11 crc kubenswrapper[4833]: E0217 13:49:11.289721 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="1.6s" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.351747 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.352511 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.353027 4833 status_manager.go:851] "Failed to get status for pod" podUID="42b66026-007c-4217-a549-f108d5c880e5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.355404 4833 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.423125 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.423239 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.423275 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.423366 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.423407 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.423534 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.423734 4833 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.423759 4833 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.423772 4833 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.522917 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.523699 4833 status_manager.go:851] "Failed to get status for pod" podUID="42b66026-007c-4217-a549-f108d5c880e5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.524154 4833 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.625308 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/42b66026-007c-4217-a549-f108d5c880e5-kubelet-dir\") pod \"42b66026-007c-4217-a549-f108d5c880e5\" (UID: \"42b66026-007c-4217-a549-f108d5c880e5\") " Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.625395 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/42b66026-007c-4217-a549-f108d5c880e5-var-lock\") pod \"42b66026-007c-4217-a549-f108d5c880e5\" (UID: \"42b66026-007c-4217-a549-f108d5c880e5\") " Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.625440 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b66026-007c-4217-a549-f108d5c880e5-kube-api-access\") pod \"42b66026-007c-4217-a549-f108d5c880e5\" (UID: \"42b66026-007c-4217-a549-f108d5c880e5\") " Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.625808 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42b66026-007c-4217-a549-f108d5c880e5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "42b66026-007c-4217-a549-f108d5c880e5" (UID: "42b66026-007c-4217-a549-f108d5c880e5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.625875 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42b66026-007c-4217-a549-f108d5c880e5-var-lock" (OuterVolumeSpecName: "var-lock") pod "42b66026-007c-4217-a549-f108d5c880e5" (UID: "42b66026-007c-4217-a549-f108d5c880e5"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.633292 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b66026-007c-4217-a549-f108d5c880e5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "42b66026-007c-4217-a549-f108d5c880e5" (UID: "42b66026-007c-4217-a549-f108d5c880e5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.726852 4833 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/42b66026-007c-4217-a549-f108d5c880e5-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.726926 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b66026-007c-4217-a549-f108d5c880e5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:11 crc kubenswrapper[4833]: I0217 13:49:11.726956 4833 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42b66026-007c-4217-a549-f108d5c880e5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.195384 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"42b66026-007c-4217-a549-f108d5c880e5","Type":"ContainerDied","Data":"9d26d1536dc65aaf3959181781e2a8a5f7deeb0c4f14bcc8f868bc962e79464f"} Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.195718 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d26d1536dc65aaf3959181781e2a8a5f7deeb0c4f14bcc8f868bc962e79464f" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.195773 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.200434 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.201520 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256" exitCode=0 Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.201642 4833 scope.go:117] "RemoveContainer" containerID="436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.201684 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.224091 4833 status_manager.go:851] "Failed to get status for pod" podUID="42b66026-007c-4217-a549-f108d5c880e5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.224385 4833 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.224732 4833 status_manager.go:851] "Failed to get status for pod" podUID="42b66026-007c-4217-a549-f108d5c880e5" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.225087 4833 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.227692 4833 scope.go:117] "RemoveContainer" containerID="9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.251773 4833 scope.go:117] "RemoveContainer" containerID="84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.268311 4833 scope.go:117] "RemoveContainer" containerID="090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.287120 4833 scope.go:117] "RemoveContainer" containerID="8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.308129 4833 scope.go:117] "RemoveContainer" containerID="208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.329934 4833 scope.go:117] "RemoveContainer" containerID="436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51" Feb 17 13:49:12 crc kubenswrapper[4833]: E0217 13:49:12.330411 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\": container with ID starting with 436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51 not found: ID does not 
exist" containerID="436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.330451 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51"} err="failed to get container status \"436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\": rpc error: code = NotFound desc = could not find container \"436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51\": container with ID starting with 436df1e343e5ca9347dd98b9da03c74816b0bc0ec6cecd94dc3b21c38eea9b51 not found: ID does not exist" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.330479 4833 scope.go:117] "RemoveContainer" containerID="9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553" Feb 17 13:49:12 crc kubenswrapper[4833]: E0217 13:49:12.330750 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\": container with ID starting with 9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553 not found: ID does not exist" containerID="9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.330784 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553"} err="failed to get container status \"9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\": rpc error: code = NotFound desc = could not find container \"9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553\": container with ID starting with 9f0450fd6122e12877e07084f693f67e63ac70bcecc962f7da5a32241ba14553 not found: ID does not exist" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.330806 4833 scope.go:117] 
"RemoveContainer" containerID="84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f" Feb 17 13:49:12 crc kubenswrapper[4833]: E0217 13:49:12.331016 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\": container with ID starting with 84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f not found: ID does not exist" containerID="84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.331071 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f"} err="failed to get container status \"84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\": rpc error: code = NotFound desc = could not find container \"84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f\": container with ID starting with 84373535efe637bbd5cf8649523941f0e73ad51f06dd45c913741e2a8e7b775f not found: ID does not exist" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.331087 4833 scope.go:117] "RemoveContainer" containerID="090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2" Feb 17 13:49:12 crc kubenswrapper[4833]: E0217 13:49:12.331267 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\": container with ID starting with 090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2 not found: ID does not exist" containerID="090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.331291 4833 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2"} err="failed to get container status \"090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\": rpc error: code = NotFound desc = could not find container \"090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2\": container with ID starting with 090ad37f43c1a891b1a2922b2a367e1b52ce56fd6a53e9e749f819bf281136a2 not found: ID does not exist" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.331304 4833 scope.go:117] "RemoveContainer" containerID="8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256" Feb 17 13:49:12 crc kubenswrapper[4833]: E0217 13:49:12.331503 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\": container with ID starting with 8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256 not found: ID does not exist" containerID="8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.331526 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256"} err="failed to get container status \"8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\": rpc error: code = NotFound desc = could not find container \"8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256\": container with ID starting with 8f18bb5a144ef41de921b7b92ee560b76e3f9f9a626be4d5638db46e4688d256 not found: ID does not exist" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.331539 4833 scope.go:117] "RemoveContainer" containerID="208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272" Feb 17 13:49:12 crc kubenswrapper[4833]: E0217 13:49:12.331762 4833 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\": container with ID starting with 208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272 not found: ID does not exist" containerID="208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272" Feb 17 13:49:12 crc kubenswrapper[4833]: I0217 13:49:12.331781 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272"} err="failed to get container status \"208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\": rpc error: code = NotFound desc = could not find container \"208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272\": container with ID starting with 208a04a1cf503b81e07a99db5edb62e3586825bbc18c11db989bf73d7e2ba272 not found: ID does not exist" Feb 17 13:49:12 crc kubenswrapper[4833]: E0217 13:49:12.891306 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="3.2s" Feb 17 13:49:13 crc kubenswrapper[4833]: I0217 13:49:13.047515 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 17 13:49:15 crc kubenswrapper[4833]: E0217 13:49:15.076705 4833 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.243:6443: connect: connection refused" 
pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" volumeName="registry-storage" Feb 17 13:49:16 crc kubenswrapper[4833]: E0217 13:49:16.093440 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="6.4s" Feb 17 13:49:21 crc kubenswrapper[4833]: I0217 13:49:21.048565 4833 status_manager.go:851] "Failed to get status for pod" podUID="42b66026-007c-4217-a549-f108d5c880e5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:21 crc kubenswrapper[4833]: E0217 13:49:21.231807 4833 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.243:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18950cd7c6f48b64 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 13:49:09.335575396 +0000 UTC m=+238.970674839,LastTimestamp:2026-02-17 13:49:09.335575396 +0000 UTC m=+238.970674839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 13:49:22 crc kubenswrapper[4833]: E0217 13:49:22.355564 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:49:22Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:49:22Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:49:22Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:49:22Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:22 crc kubenswrapper[4833]: E0217 13:49:22.355923 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:22 crc kubenswrapper[4833]: E0217 13:49:22.356467 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:22 crc kubenswrapper[4833]: E0217 
13:49:22.356884 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:22 crc kubenswrapper[4833]: E0217 13:49:22.357466 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:22 crc kubenswrapper[4833]: E0217 13:49:22.357503 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 13:49:22 crc kubenswrapper[4833]: E0217 13:49:22.494469 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="7s" Feb 17 13:49:23 crc kubenswrapper[4833]: I0217 13:49:23.041180 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:23 crc kubenswrapper[4833]: I0217 13:49:23.042593 4833 status_manager.go:851] "Failed to get status for pod" podUID="42b66026-007c-4217-a549-f108d5c880e5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:23 crc kubenswrapper[4833]: I0217 13:49:23.069758 4833 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="23fd4be9-debc-405d-ac05-5a5160593231" Feb 17 13:49:23 crc kubenswrapper[4833]: I0217 13:49:23.069841 4833 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="23fd4be9-debc-405d-ac05-5a5160593231" Feb 17 13:49:23 crc kubenswrapper[4833]: E0217 13:49:23.070635 4833 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:23 crc kubenswrapper[4833]: I0217 13:49:23.071392 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:23 crc kubenswrapper[4833]: W0217 13:49:23.098475 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-14c026391166c38883473b100aaedf95833dde24e36e9014bab176e6af648a9a WatchSource:0}: Error finding container 14c026391166c38883473b100aaedf95833dde24e36e9014bab176e6af648a9a: Status 404 returned error can't find the container with id 14c026391166c38883473b100aaedf95833dde24e36e9014bab176e6af648a9a Feb 17 13:49:23 crc kubenswrapper[4833]: I0217 13:49:23.270482 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 13:49:23 crc kubenswrapper[4833]: I0217 13:49:23.270735 4833 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b" exitCode=1 Feb 17 13:49:23 crc kubenswrapper[4833]: I0217 13:49:23.270787 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b"} Feb 17 13:49:23 crc kubenswrapper[4833]: I0217 13:49:23.271189 4833 scope.go:117] "RemoveContainer" containerID="0bb5bc7e6f182dd6271a6d16d8441e1f6833a32fd03858a3b91d16a896339c2b" Feb 17 13:49:23 crc kubenswrapper[4833]: I0217 13:49:23.271643 4833 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 
38.102.83.243:6443: connect: connection refused" Feb 17 13:49:23 crc kubenswrapper[4833]: I0217 13:49:23.271973 4833 status_manager.go:851] "Failed to get status for pod" podUID="42b66026-007c-4217-a549-f108d5c880e5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:23 crc kubenswrapper[4833]: I0217 13:49:23.273700 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"14c026391166c38883473b100aaedf95833dde24e36e9014bab176e6af648a9a"} Feb 17 13:49:24 crc kubenswrapper[4833]: I0217 13:49:24.288207 4833 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="02fae739d91eb5218778cf8c69934f465c48762ad565d824d2e66eea75a693f4" exitCode=0 Feb 17 13:49:24 crc kubenswrapper[4833]: I0217 13:49:24.288254 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"02fae739d91eb5218778cf8c69934f465c48762ad565d824d2e66eea75a693f4"} Feb 17 13:49:24 crc kubenswrapper[4833]: I0217 13:49:24.288591 4833 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="23fd4be9-debc-405d-ac05-5a5160593231" Feb 17 13:49:24 crc kubenswrapper[4833]: I0217 13:49:24.288619 4833 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="23fd4be9-debc-405d-ac05-5a5160593231" Feb 17 13:49:24 crc kubenswrapper[4833]: I0217 13:49:24.288957 4833 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:24 crc kubenswrapper[4833]: E0217 13:49:24.288953 4833 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:24 crc kubenswrapper[4833]: I0217 13:49:24.289792 4833 status_manager.go:851] "Failed to get status for pod" podUID="42b66026-007c-4217-a549-f108d5c880e5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:24 crc kubenswrapper[4833]: I0217 13:49:24.292942 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 13:49:24 crc kubenswrapper[4833]: I0217 13:49:24.292987 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"adc12ac5b2b5d6c95131b27421ccd92cca8274fb3bcefda641bbce70168127f3"} Feb 17 13:49:24 crc kubenswrapper[4833]: I0217 13:49:24.293897 4833 status_manager.go:851] "Failed to get status for pod" podUID="42b66026-007c-4217-a549-f108d5c880e5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:24 crc kubenswrapper[4833]: I0217 13:49:24.294463 4833 status_manager.go:851] "Failed to get status for 
pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Feb 17 13:49:25 crc kubenswrapper[4833]: I0217 13:49:25.305327 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ad7fdf79988723a541db788ccade1a27dd2334f485f7708bc623230a0c1828da"} Feb 17 13:49:25 crc kubenswrapper[4833]: I0217 13:49:25.305801 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9fde848cf0240e04b64334e9eadd4a16af53f7db282a53c1cac793d416b1ce27"} Feb 17 13:49:25 crc kubenswrapper[4833]: I0217 13:49:25.305819 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7c871f2b362d240dad4c77ebda45c70f509ff241d43ec6f05425f596403d4801"} Feb 17 13:49:26 crc kubenswrapper[4833]: I0217 13:49:26.323490 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c26e1c1c5fd9cd1883986c4f53c9ffe6a8749d50580e8d82270d4d4b3f0527bf"} Feb 17 13:49:26 crc kubenswrapper[4833]: I0217 13:49:26.323533 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b3d9a68739dbe12279b6a85da6c3c814df46344e3615f4c17ac3c9c60cecfb79"} Feb 17 13:49:26 crc kubenswrapper[4833]: I0217 13:49:26.323905 4833 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:26 crc kubenswrapper[4833]: I0217 13:49:26.324032 4833 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="23fd4be9-debc-405d-ac05-5a5160593231" Feb 17 13:49:26 crc kubenswrapper[4833]: I0217 13:49:26.324077 4833 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="23fd4be9-debc-405d-ac05-5a5160593231" Feb 17 13:49:26 crc kubenswrapper[4833]: I0217 13:49:26.672095 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:49:26 crc kubenswrapper[4833]: I0217 13:49:26.678078 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:49:27 crc kubenswrapper[4833]: I0217 13:49:27.329289 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:49:28 crc kubenswrapper[4833]: I0217 13:49:28.072092 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:28 crc kubenswrapper[4833]: I0217 13:49:28.072489 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:28 crc kubenswrapper[4833]: I0217 13:49:28.081537 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:31 crc kubenswrapper[4833]: I0217 13:49:31.354202 4833 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:31 crc kubenswrapper[4833]: I0217 13:49:31.445200 4833 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9fb8156c-1fd8-44ab-9973-ba20594413fe" Feb 17 13:49:32 crc kubenswrapper[4833]: I0217 13:49:32.363241 4833 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="23fd4be9-debc-405d-ac05-5a5160593231" Feb 17 13:49:32 crc kubenswrapper[4833]: I0217 13:49:32.363305 4833 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="23fd4be9-debc-405d-ac05-5a5160593231" Feb 17 13:49:32 crc kubenswrapper[4833]: I0217 13:49:32.366946 4833 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9fb8156c-1fd8-44ab-9973-ba20594413fe" Feb 17 13:49:32 crc kubenswrapper[4833]: I0217 13:49:32.369560 4833 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://7c871f2b362d240dad4c77ebda45c70f509ff241d43ec6f05425f596403d4801" Feb 17 13:49:32 crc kubenswrapper[4833]: I0217 13:49:32.369599 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:33 crc kubenswrapper[4833]: I0217 13:49:33.373943 4833 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="23fd4be9-debc-405d-ac05-5a5160593231" Feb 17 13:49:33 crc kubenswrapper[4833]: I0217 13:49:33.374257 4833 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="23fd4be9-debc-405d-ac05-5a5160593231" Feb 17 13:49:33 crc kubenswrapper[4833]: I0217 13:49:33.377829 4833 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
podUID="9fb8156c-1fd8-44ab-9973-ba20594413fe" Feb 17 13:49:38 crc kubenswrapper[4833]: I0217 13:49:38.361929 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:49:41 crc kubenswrapper[4833]: I0217 13:49:41.652885 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 13:49:41 crc kubenswrapper[4833]: I0217 13:49:41.693537 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 13:49:42 crc kubenswrapper[4833]: I0217 13:49:42.043938 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 17 13:49:42 crc kubenswrapper[4833]: I0217 13:49:42.142025 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 13:49:42 crc kubenswrapper[4833]: I0217 13:49:42.205499 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 13:49:42 crc kubenswrapper[4833]: I0217 13:49:42.236880 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 13:49:42 crc kubenswrapper[4833]: I0217 13:49:42.251839 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 13:49:42 crc kubenswrapper[4833]: I0217 13:49:42.324147 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 13:49:42 crc kubenswrapper[4833]: I0217 13:49:42.680346 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 13:49:42 crc 
kubenswrapper[4833]: I0217 13:49:42.793884 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 13:49:42 crc kubenswrapper[4833]: I0217 13:49:42.813030 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 13:49:43 crc kubenswrapper[4833]: I0217 13:49:43.079381 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 13:49:43 crc kubenswrapper[4833]: I0217 13:49:43.111532 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 13:49:43 crc kubenswrapper[4833]: I0217 13:49:43.165494 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 13:49:43 crc kubenswrapper[4833]: I0217 13:49:43.257811 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 13:49:43 crc kubenswrapper[4833]: I0217 13:49:43.267195 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 13:49:43 crc kubenswrapper[4833]: I0217 13:49:43.349142 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 13:49:43 crc kubenswrapper[4833]: I0217 13:49:43.412319 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 13:49:43 crc kubenswrapper[4833]: I0217 13:49:43.451449 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 13:49:43 crc kubenswrapper[4833]: I0217 13:49:43.535421 4833 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 13:49:43 crc kubenswrapper[4833]: I0217 13:49:43.576143 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 13:49:43 crc kubenswrapper[4833]: I0217 13:49:43.750026 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 13:49:43 crc kubenswrapper[4833]: I0217 13:49:43.838213 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 13:49:43 crc kubenswrapper[4833]: I0217 13:49:43.970387 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 13:49:44 crc kubenswrapper[4833]: I0217 13:49:44.300878 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 13:49:44 crc kubenswrapper[4833]: I0217 13:49:44.366375 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 13:49:44 crc kubenswrapper[4833]: I0217 13:49:44.425846 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 13:49:44 crc kubenswrapper[4833]: I0217 13:49:44.456188 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 13:49:44 crc kubenswrapper[4833]: I0217 13:49:44.510883 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 13:49:44 crc kubenswrapper[4833]: I0217 13:49:44.533148 4833 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 13:49:44 crc kubenswrapper[4833]: I0217 13:49:44.588920 4833 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 13:49:44 crc kubenswrapper[4833]: I0217 13:49:44.685701 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 13:49:44 crc kubenswrapper[4833]: I0217 13:49:44.797867 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 17 13:49:44 crc kubenswrapper[4833]: I0217 13:49:44.811776 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 13:49:44 crc kubenswrapper[4833]: I0217 13:49:44.865321 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.000758 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.090654 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.093731 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.153438 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.176195 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.242970 4833 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"serving-cert" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.295354 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.313679 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.314123 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.356514 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.357807 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.388772 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.439933 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.463662 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.541527 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.541566 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 
13:49:45.601109 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.662838 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.699089 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.709247 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.724530 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.758618 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.806741 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.852189 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.918357 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.924452 4833 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.930281 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 
13:49:45.930339 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.936469 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.952707 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.952688028 podStartE2EDuration="14.952688028s" podCreationTimestamp="2026-02-17 13:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:49:45.948003518 +0000 UTC m=+275.583102991" watchObservedRunningTime="2026-02-17 13:49:45.952688028 +0000 UTC m=+275.587787491" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.960448 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 13:49:45 crc kubenswrapper[4833]: I0217 13:49:45.984494 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 13:49:46 crc kubenswrapper[4833]: I0217 13:49:46.020876 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 13:49:46 crc kubenswrapper[4833]: I0217 13:49:46.046430 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 13:49:46 crc kubenswrapper[4833]: I0217 13:49:46.053573 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 13:49:46 crc kubenswrapper[4833]: I0217 13:49:46.084217 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 13:49:46 crc kubenswrapper[4833]: I0217 
13:49:46.100176 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 13:49:46 crc kubenswrapper[4833]: I0217 13:49:46.241341 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 13:49:46 crc kubenswrapper[4833]: I0217 13:49:46.274729 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 13:49:46 crc kubenswrapper[4833]: I0217 13:49:46.285818 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 13:49:46 crc kubenswrapper[4833]: I0217 13:49:46.395952 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 13:49:46 crc kubenswrapper[4833]: I0217 13:49:46.396196 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 13:49:46 crc kubenswrapper[4833]: I0217 13:49:46.444180 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 13:49:46 crc kubenswrapper[4833]: I0217 13:49:46.456169 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 13:49:46 crc kubenswrapper[4833]: I0217 13:49:46.468695 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 13:49:46 crc kubenswrapper[4833]: I0217 13:49:46.515974 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 13:49:46 crc kubenswrapper[4833]: I0217 13:49:46.598834 4833 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 13:49:46 crc kubenswrapper[4833]: I0217 13:49:46.849448 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 13:49:46 crc kubenswrapper[4833]: I0217 13:49:46.909128 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 13:49:46 crc kubenswrapper[4833]: I0217 13:49:46.978410 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 13:49:47 crc kubenswrapper[4833]: I0217 13:49:47.035146 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 13:49:47 crc kubenswrapper[4833]: I0217 13:49:47.054674 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 13:49:47 crc kubenswrapper[4833]: I0217 13:49:47.166146 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 13:49:47 crc kubenswrapper[4833]: I0217 13:49:47.180368 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 13:49:47 crc kubenswrapper[4833]: I0217 13:49:47.190408 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 13:49:47 crc kubenswrapper[4833]: I0217 13:49:47.228738 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 13:49:47 crc kubenswrapper[4833]: I0217 13:49:47.278676 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 17 13:49:47 crc kubenswrapper[4833]: I0217 13:49:47.342371 4833 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 13:49:47 crc kubenswrapper[4833]: I0217 13:49:47.461334 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 13:49:47 crc kubenswrapper[4833]: I0217 13:49:47.463597 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 13:49:47 crc kubenswrapper[4833]: I0217 13:49:47.469594 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 13:49:47 crc kubenswrapper[4833]: I0217 13:49:47.553947 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 13:49:47 crc kubenswrapper[4833]: I0217 13:49:47.604067 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 13:49:47 crc kubenswrapper[4833]: I0217 13:49:47.618354 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 13:49:47 crc kubenswrapper[4833]: I0217 13:49:47.680436 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 13:49:47 crc kubenswrapper[4833]: I0217 13:49:47.717221 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 13:49:47 crc kubenswrapper[4833]: I0217 13:49:47.832307 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 13:49:47 crc kubenswrapper[4833]: I0217 13:49:47.870150 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 13:49:48 crc 
kubenswrapper[4833]: I0217 13:49:48.166220 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 13:49:48 crc kubenswrapper[4833]: I0217 13:49:48.207685 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 13:49:48 crc kubenswrapper[4833]: I0217 13:49:48.213866 4833 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 13:49:48 crc kubenswrapper[4833]: I0217 13:49:48.243828 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 13:49:48 crc kubenswrapper[4833]: I0217 13:49:48.246737 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 13:49:48 crc kubenswrapper[4833]: I0217 13:49:48.298738 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 17 13:49:48 crc kubenswrapper[4833]: I0217 13:49:48.353508 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 13:49:48 crc kubenswrapper[4833]: I0217 13:49:48.424012 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 13:49:48 crc kubenswrapper[4833]: I0217 13:49:48.546920 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 13:49:48 crc kubenswrapper[4833]: I0217 13:49:48.570879 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 13:49:48 crc kubenswrapper[4833]: I0217 13:49:48.570992 4833 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 13:49:48 crc kubenswrapper[4833]: I0217 13:49:48.603540 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 13:49:48 crc kubenswrapper[4833]: I0217 13:49:48.622161 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 13:49:48 crc kubenswrapper[4833]: I0217 13:49:48.671602 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 13:49:48 crc kubenswrapper[4833]: I0217 13:49:48.708960 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 13:49:48 crc kubenswrapper[4833]: I0217 13:49:48.721538 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 13:49:48 crc kubenswrapper[4833]: I0217 13:49:48.730683 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 13:49:48 crc kubenswrapper[4833]: I0217 13:49:48.896953 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 13:49:49 crc kubenswrapper[4833]: I0217 13:49:49.043717 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 13:49:49 crc kubenswrapper[4833]: I0217 13:49:49.054642 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 13:49:49 crc kubenswrapper[4833]: I0217 13:49:49.140823 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 13:49:49 crc 
kubenswrapper[4833]: I0217 13:49:49.146615 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 13:49:49 crc kubenswrapper[4833]: I0217 13:49:49.191831 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 13:49:49 crc kubenswrapper[4833]: I0217 13:49:49.192332 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 13:49:49 crc kubenswrapper[4833]: I0217 13:49:49.240796 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 13:49:49 crc kubenswrapper[4833]: I0217 13:49:49.364328 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 13:49:49 crc kubenswrapper[4833]: I0217 13:49:49.367191 4833 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 13:49:49 crc kubenswrapper[4833]: I0217 13:49:49.385185 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 13:49:49 crc kubenswrapper[4833]: I0217 13:49:49.441175 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 13:49:49 crc kubenswrapper[4833]: I0217 13:49:49.468078 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 13:49:49 crc kubenswrapper[4833]: I0217 13:49:49.483744 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 13:49:49 crc kubenswrapper[4833]: I0217 13:49:49.490419 4833 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 13:49:49 crc kubenswrapper[4833]: I0217 13:49:49.498006 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 13:49:49 crc kubenswrapper[4833]: I0217 13:49:49.503744 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 13:49:49 crc kubenswrapper[4833]: I0217 13:49:49.887712 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 13:49:49 crc kubenswrapper[4833]: I0217 13:49:49.901705 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 17 13:49:49 crc kubenswrapper[4833]: I0217 13:49:49.975360 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.015583 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.040759 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.107131 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.120422 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.213893 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 13:49:50 
crc kubenswrapper[4833]: I0217 13:49:50.248779 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.252943 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.275105 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.318845 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.320145 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.322805 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.333893 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.453574 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.492268 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.493176 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.494688 4833 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.530440 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.625799 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.660612 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.706783 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.706813 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.852811 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.878131 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.981064 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.996420 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 13:49:50 crc kubenswrapper[4833]: I0217 13:49:50.999814 4833 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.031812 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.054547 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.115533 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.156989 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.177742 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.198926 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.230483 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.278919 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.322750 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.391626 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.431347 4833 reflector.go:368] Caches 
populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.467847 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.468337 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.490008 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.500795 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.521524 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.654688 4833 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.660706 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.663870 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.690813 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.704027 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.704626 4833 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.740330 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.783593 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.866284 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.900516 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 13:49:51 crc kubenswrapper[4833]: I0217 13:49:51.924738 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 13:49:52 crc kubenswrapper[4833]: I0217 13:49:52.337753 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 13:49:52 crc kubenswrapper[4833]: I0217 13:49:52.376967 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 13:49:52 crc kubenswrapper[4833]: I0217 13:49:52.378650 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 13:49:52 crc kubenswrapper[4833]: I0217 13:49:52.498760 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 13:49:52 crc kubenswrapper[4833]: I0217 13:49:52.541802 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 13:49:52 crc 
kubenswrapper[4833]: I0217 13:49:52.552813 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 13:49:52 crc kubenswrapper[4833]: I0217 13:49:52.656108 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 13:49:52 crc kubenswrapper[4833]: I0217 13:49:52.678802 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 13:49:52 crc kubenswrapper[4833]: I0217 13:49:52.697713 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 13:49:52 crc kubenswrapper[4833]: I0217 13:49:52.737798 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 13:49:52 crc kubenswrapper[4833]: I0217 13:49:52.770028 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 13:49:52 crc kubenswrapper[4833]: I0217 13:49:52.770850 4833 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 13:49:52 crc kubenswrapper[4833]: I0217 13:49:52.771168 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://e5638dcaff688dc06379447e219950f594937d7643ce6947869ba3d1c9c8b3ac" gracePeriod=5 Feb 17 13:49:52 crc kubenswrapper[4833]: I0217 13:49:52.780323 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 13:49:52 crc kubenswrapper[4833]: I0217 13:49:52.839440 4833 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 13:49:52 crc kubenswrapper[4833]: I0217 13:49:52.909635 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 13:49:52 crc kubenswrapper[4833]: I0217 13:49:52.954162 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 13:49:53 crc kubenswrapper[4833]: I0217 13:49:53.005612 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 13:49:53 crc kubenswrapper[4833]: I0217 13:49:53.047094 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 13:49:53 crc kubenswrapper[4833]: I0217 13:49:53.074875 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 13:49:53 crc kubenswrapper[4833]: I0217 13:49:53.206814 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 13:49:53 crc kubenswrapper[4833]: I0217 13:49:53.250705 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 13:49:53 crc kubenswrapper[4833]: I0217 13:49:53.332294 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 13:49:53 crc kubenswrapper[4833]: I0217 13:49:53.541408 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 13:49:53 crc kubenswrapper[4833]: I0217 13:49:53.545655 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 13:49:53 crc 
kubenswrapper[4833]: I0217 13:49:53.587419 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 13:49:53 crc kubenswrapper[4833]: I0217 13:49:53.646568 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 13:49:53 crc kubenswrapper[4833]: I0217 13:49:53.737886 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 13:49:53 crc kubenswrapper[4833]: I0217 13:49:53.778696 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 13:49:53 crc kubenswrapper[4833]: I0217 13:49:53.790462 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 13:49:53 crc kubenswrapper[4833]: I0217 13:49:53.857599 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 13:49:53 crc kubenswrapper[4833]: I0217 13:49:53.988704 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 13:49:53 crc kubenswrapper[4833]: I0217 13:49:53.989597 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 13:49:54 crc kubenswrapper[4833]: I0217 13:49:54.014854 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 13:49:54 crc kubenswrapper[4833]: I0217 13:49:54.066577 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 13:49:54 crc kubenswrapper[4833]: I0217 13:49:54.066773 4833 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 13:49:54 crc kubenswrapper[4833]: I0217 13:49:54.201244 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 13:49:54 crc kubenswrapper[4833]: I0217 13:49:54.391927 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 13:49:54 crc kubenswrapper[4833]: I0217 13:49:54.416113 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 13:49:54 crc kubenswrapper[4833]: I0217 13:49:54.454001 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 13:49:54 crc kubenswrapper[4833]: I0217 13:49:54.486082 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 13:49:54 crc kubenswrapper[4833]: I0217 13:49:54.548574 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 13:49:54 crc kubenswrapper[4833]: I0217 13:49:54.687768 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 13:49:54 crc kubenswrapper[4833]: I0217 13:49:54.700406 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 13:49:54 crc kubenswrapper[4833]: I0217 13:49:54.972930 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 13:49:55 crc kubenswrapper[4833]: I0217 13:49:55.326399 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 13:49:55 crc kubenswrapper[4833]: I0217 13:49:55.601312 
4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 13:49:55 crc kubenswrapper[4833]: I0217 13:49:55.841228 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 13:49:55 crc kubenswrapper[4833]: I0217 13:49:55.903981 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 13:49:55 crc kubenswrapper[4833]: I0217 13:49:55.943305 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 13:49:56 crc kubenswrapper[4833]: I0217 13:49:56.002291 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 13:49:56 crc kubenswrapper[4833]: I0217 13:49:56.006275 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 13:49:56 crc kubenswrapper[4833]: I0217 13:49:56.255558 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 13:49:56 crc kubenswrapper[4833]: I0217 13:49:56.461195 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 13:49:56 crc kubenswrapper[4833]: I0217 13:49:56.578391 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 13:49:56 crc kubenswrapper[4833]: I0217 13:49:56.705029 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 13:49:56 crc kubenswrapper[4833]: I0217 13:49:56.723117 4833 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 13:49:56 crc kubenswrapper[4833]: I0217 13:49:56.969006 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 13:49:56 crc kubenswrapper[4833]: I0217 13:49:56.977877 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.076348 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.592252 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vnkbw"] Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.593461 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vnkbw" podUID="7866e11a-9385-4003-9406-d4012097cbb3" containerName="registry-server" containerID="cri-o://a7ad310c2978ae8527ee6f0a7b07ea9c14d963721633e0b6126eda824d67fad1" gracePeriod=30 Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.598847 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5q7l2"] Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.599356 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5q7l2" podUID="3cf157f5-3e18-491b-a285-a25a7e71b2ff" containerName="registry-server" containerID="cri-o://272ff41451e44eb2ec40da1c9b64ebc61e7435315a24555d9e15e946e61f2023" gracePeriod=30 Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.615806 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6nkfm"] Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.616134 4833 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm" podUID="a39a9177-9838-434f-a2e0-c8359ff146fe" containerName="marketplace-operator" containerID="cri-o://0c2ecd6469f780c39aa056f478860c25ca11ce0b43315c08cc2709be069a2396" gracePeriod=30 Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.620657 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s22hg"] Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.621002 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s22hg" podUID="811bfe90-4b33-4bfb-969f-63d5dbde1b94" containerName="registry-server" containerID="cri-o://73c03caad43fe8904f9819a2ad9753845e265d6a7fbd4c044d53d8a6bbbcc92e" gracePeriod=30 Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.630411 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wgmx8"] Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.630736 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wgmx8" podUID="5fc2132e-3150-4783-a56a-0bd9f33d4c6c" containerName="registry-server" containerID="cri-o://dba5f603c65d2271744e28a1a39c0c96ec73004382e95ecac27662b2056a3f6b" gracePeriod=30 Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.649018 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d8g96"] Feb 17 13:49:57 crc kubenswrapper[4833]: E0217 13:49:57.649284 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.649298 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 13:49:57 crc kubenswrapper[4833]: E0217 13:49:57.649313 4833 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b66026-007c-4217-a549-f108d5c880e5" containerName="installer" Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.649321 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b66026-007c-4217-a549-f108d5c880e5" containerName="installer" Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.649425 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.649441 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b66026-007c-4217-a549-f108d5c880e5" containerName="installer" Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.649906 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d8g96" Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.663545 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d8g96"] Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.765087 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzxbr\" (UniqueName: \"kubernetes.io/projected/3a8790e9-2fd7-49cb-a382-d3671a8019a7-kube-api-access-kzxbr\") pod \"marketplace-operator-79b997595-d8g96\" (UID: \"3a8790e9-2fd7-49cb-a382-d3671a8019a7\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8g96" Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.765186 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3a8790e9-2fd7-49cb-a382-d3671a8019a7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d8g96\" (UID: \"3a8790e9-2fd7-49cb-a382-d3671a8019a7\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-d8g96"
Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.765244 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3a8790e9-2fd7-49cb-a382-d3671a8019a7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d8g96\" (UID: \"3a8790e9-2fd7-49cb-a382-d3671a8019a7\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8g96"
Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.866193 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3a8790e9-2fd7-49cb-a382-d3671a8019a7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d8g96\" (UID: \"3a8790e9-2fd7-49cb-a382-d3671a8019a7\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8g96"
Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.866252 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3a8790e9-2fd7-49cb-a382-d3671a8019a7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d8g96\" (UID: \"3a8790e9-2fd7-49cb-a382-d3671a8019a7\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8g96"
Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.866276 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzxbr\" (UniqueName: \"kubernetes.io/projected/3a8790e9-2fd7-49cb-a382-d3671a8019a7-kube-api-access-kzxbr\") pod \"marketplace-operator-79b997595-d8g96\" (UID: \"3a8790e9-2fd7-49cb-a382-d3671a8019a7\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8g96"
Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.867348 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\"
(UniqueName: \"kubernetes.io/configmap/3a8790e9-2fd7-49cb-a382-d3671a8019a7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d8g96\" (UID: \"3a8790e9-2fd7-49cb-a382-d3671a8019a7\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8g96"
Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.873578 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3a8790e9-2fd7-49cb-a382-d3671a8019a7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d8g96\" (UID: \"3a8790e9-2fd7-49cb-a382-d3671a8019a7\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8g96"
Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.886501 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzxbr\" (UniqueName: \"kubernetes.io/projected/3a8790e9-2fd7-49cb-a382-d3671a8019a7-kube-api-access-kzxbr\") pod \"marketplace-operator-79b997595-d8g96\" (UID: \"3a8790e9-2fd7-49cb-a382-d3671a8019a7\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8g96"
Feb 17 13:49:57 crc kubenswrapper[4833]: I0217 13:49:57.969194 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d8g96"
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.122418 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.122487 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.126605 4833 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-vnkbw"
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.138891 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5q7l2"
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.142942 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm"
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.157447 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s22hg"
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.170509 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc9xj\" (UniqueName: \"kubernetes.io/projected/a39a9177-9838-434f-a2e0-c8359ff146fe-kube-api-access-bc9xj\") pod \"a39a9177-9838-434f-a2e0-c8359ff146fe\" (UID: \"a39a9177-9838-434f-a2e0-c8359ff146fe\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.170562 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cf157f5-3e18-491b-a285-a25a7e71b2ff-catalog-content\") pod \"3cf157f5-3e18-491b-a285-a25a7e71b2ff\" (UID: \"3cf157f5-3e18-491b-a285-a25a7e71b2ff\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.170604 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.170631 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod
\"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.170671 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.170696 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a39a9177-9838-434f-a2e0-c8359ff146fe-marketplace-trusted-ca\") pod \"a39a9177-9838-434f-a2e0-c8359ff146fe\" (UID: \"a39a9177-9838-434f-a2e0-c8359ff146fe\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.170721 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vx2q\" (UniqueName: \"kubernetes.io/projected/3cf157f5-3e18-491b-a285-a25a7e71b2ff-kube-api-access-7vx2q\") pod \"3cf157f5-3e18-491b-a285-a25a7e71b2ff\" (UID: \"3cf157f5-3e18-491b-a285-a25a7e71b2ff\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.170759 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.170781 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7866e11a-9385-4003-9406-d4012097cbb3-catalog-content\") pod \"7866e11a-9385-4003-9406-d4012097cbb3\" (UID: \"7866e11a-9385-4003-9406-d4012097cbb3\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.170811 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume
\"kube-api-access-nh9d5\" (UniqueName: \"kubernetes.io/projected/7866e11a-9385-4003-9406-d4012097cbb3-kube-api-access-nh9d5\") pod \"7866e11a-9385-4003-9406-d4012097cbb3\" (UID: \"7866e11a-9385-4003-9406-d4012097cbb3\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.170836 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7866e11a-9385-4003-9406-d4012097cbb3-utilities\") pod \"7866e11a-9385-4003-9406-d4012097cbb3\" (UID: \"7866e11a-9385-4003-9406-d4012097cbb3\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.170857 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.170881 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cf157f5-3e18-491b-a285-a25a7e71b2ff-utilities\") pod \"3cf157f5-3e18-491b-a285-a25a7e71b2ff\" (UID: \"3cf157f5-3e18-491b-a285-a25a7e71b2ff\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.170924 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a39a9177-9838-434f-a2e0-c8359ff146fe-marketplace-operator-metrics\") pod \"a39a9177-9838-434f-a2e0-c8359ff146fe\" (UID: \"a39a9177-9838-434f-a2e0-c8359ff146fe\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.225698 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a39a9177-9838-434f-a2e0-c8359ff146fe-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "a39a9177-9838-434f-a2e0-c8359ff146fe" (UID: "a39a9177-9838-434f-a2e0-c8359ff146fe").
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.225775 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.227621 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.227909 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7866e11a-9385-4003-9406-d4012097cbb3-utilities" (OuterVolumeSpecName: "utilities") pod "7866e11a-9385-4003-9406-d4012097cbb3" (UID: "7866e11a-9385-4003-9406-d4012097cbb3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.228565 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cf157f5-3e18-491b-a285-a25a7e71b2ff-utilities" (OuterVolumeSpecName: "utilities") pod "3cf157f5-3e18-491b-a285-a25a7e71b2ff" (UID: "3cf157f5-3e18-491b-a285-a25a7e71b2ff"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.228761 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.229201 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cf157f5-3e18-491b-a285-a25a7e71b2ff-kube-api-access-7vx2q" (OuterVolumeSpecName: "kube-api-access-7vx2q") pod "3cf157f5-3e18-491b-a285-a25a7e71b2ff" (UID: "3cf157f5-3e18-491b-a285-a25a7e71b2ff"). InnerVolumeSpecName "kube-api-access-7vx2q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.229279 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a39a9177-9838-434f-a2e0-c8359ff146fe-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "a39a9177-9838-434f-a2e0-c8359ff146fe" (UID: "a39a9177-9838-434f-a2e0-c8359ff146fe"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.229318 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.229843 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a39a9177-9838-434f-a2e0-c8359ff146fe-kube-api-access-bc9xj" (OuterVolumeSpecName: "kube-api-access-bc9xj") pod "a39a9177-9838-434f-a2e0-c8359ff146fe" (UID: "a39a9177-9838-434f-a2e0-c8359ff146fe"). InnerVolumeSpecName "kube-api-access-bc9xj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.274175 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7866e11a-9385-4003-9406-d4012097cbb3-kube-api-access-nh9d5" (OuterVolumeSpecName: "kube-api-access-nh9d5") pod "7866e11a-9385-4003-9406-d4012097cbb3" (UID: "7866e11a-9385-4003-9406-d4012097cbb3"). InnerVolumeSpecName "kube-api-access-nh9d5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.275608 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkhc8\" (UniqueName: \"kubernetes.io/projected/811bfe90-4b33-4bfb-969f-63d5dbde1b94-kube-api-access-vkhc8\") pod \"811bfe90-4b33-4bfb-969f-63d5dbde1b94\" (UID: \"811bfe90-4b33-4bfb-969f-63d5dbde1b94\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.275702 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/811bfe90-4b33-4bfb-969f-63d5dbde1b94-utilities\") pod \"811bfe90-4b33-4bfb-969f-63d5dbde1b94\" (UID: \"811bfe90-4b33-4bfb-969f-63d5dbde1b94\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.275748 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/811bfe90-4b33-4bfb-969f-63d5dbde1b94-catalog-content\") pod \"811bfe90-4b33-4bfb-969f-63d5dbde1b94\"
(UID: \"811bfe90-4b33-4bfb-969f-63d5dbde1b94\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.276258 4833 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a39a9177-9838-434f-a2e0-c8359ff146fe-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.276271 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vx2q\" (UniqueName: \"kubernetes.io/projected/3cf157f5-3e18-491b-a285-a25a7e71b2ff-kube-api-access-7vx2q\") on node \"crc\" DevicePath \"\""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.276285 4833 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.276317 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh9d5\" (UniqueName: \"kubernetes.io/projected/7866e11a-9385-4003-9406-d4012097cbb3-kube-api-access-nh9d5\") on node \"crc\" DevicePath \"\""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.276326 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7866e11a-9385-4003-9406-d4012097cbb3-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.276337 4833 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.276349 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cf157f5-3e18-491b-a285-a25a7e71b2ff-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.276359 4833
reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a39a9177-9838-434f-a2e0-c8359ff146fe-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.276368 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc9xj\" (UniqueName: \"kubernetes.io/projected/a39a9177-9838-434f-a2e0-c8359ff146fe-kube-api-access-bc9xj\") on node \"crc\" DevicePath \"\""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.276402 4833 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.276410 4833 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.280305 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wgmx8"
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.289131 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.290771 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/811bfe90-4b33-4bfb-969f-63d5dbde1b94-utilities" (OuterVolumeSpecName: "utilities") pod "811bfe90-4b33-4bfb-969f-63d5dbde1b94" (UID: "811bfe90-4b33-4bfb-969f-63d5dbde1b94").
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.293200 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/811bfe90-4b33-4bfb-969f-63d5dbde1b94-kube-api-access-vkhc8" (OuterVolumeSpecName: "kube-api-access-vkhc8") pod "811bfe90-4b33-4bfb-969f-63d5dbde1b94" (UID: "811bfe90-4b33-4bfb-969f-63d5dbde1b94"). InnerVolumeSpecName "kube-api-access-vkhc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.307839 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/811bfe90-4b33-4bfb-969f-63d5dbde1b94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "811bfe90-4b33-4bfb-969f-63d5dbde1b94" (UID: "811bfe90-4b33-4bfb-969f-63d5dbde1b94"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.349342 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cf157f5-3e18-491b-a285-a25a7e71b2ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cf157f5-3e18-491b-a285-a25a7e71b2ff" (UID: "3cf157f5-3e18-491b-a285-a25a7e71b2ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.351589 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7866e11a-9385-4003-9406-d4012097cbb3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7866e11a-9385-4003-9406-d4012097cbb3" (UID: "7866e11a-9385-4003-9406-d4012097cbb3"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.383311 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fc2132e-3150-4783-a56a-0bd9f33d4c6c-catalog-content\") pod \"5fc2132e-3150-4783-a56a-0bd9f33d4c6c\" (UID: \"5fc2132e-3150-4783-a56a-0bd9f33d4c6c\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.383356 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fc2132e-3150-4783-a56a-0bd9f33d4c6c-utilities\") pod \"5fc2132e-3150-4783-a56a-0bd9f33d4c6c\" (UID: \"5fc2132e-3150-4783-a56a-0bd9f33d4c6c\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.383426 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxg8s\" (UniqueName: \"kubernetes.io/projected/5fc2132e-3150-4783-a56a-0bd9f33d4c6c-kube-api-access-qxg8s\") pod \"5fc2132e-3150-4783-a56a-0bd9f33d4c6c\" (UID: \"5fc2132e-3150-4783-a56a-0bd9f33d4c6c\") "
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.383566 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7866e11a-9385-4003-9406-d4012097cbb3-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.383577 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/811bfe90-4b33-4bfb-969f-63d5dbde1b94-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.383586 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/811bfe90-4b33-4bfb-969f-63d5dbde1b94-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.383594 4833
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cf157f5-3e18-491b-a285-a25a7e71b2ff-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.383602 4833 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.383611 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkhc8\" (UniqueName: \"kubernetes.io/projected/811bfe90-4b33-4bfb-969f-63d5dbde1b94-kube-api-access-vkhc8\") on node \"crc\" DevicePath \"\""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.384111 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fc2132e-3150-4783-a56a-0bd9f33d4c6c-utilities" (OuterVolumeSpecName: "utilities") pod "5fc2132e-3150-4783-a56a-0bd9f33d4c6c" (UID: "5fc2132e-3150-4783-a56a-0bd9f33d4c6c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.386571 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fc2132e-3150-4783-a56a-0bd9f33d4c6c-kube-api-access-qxg8s" (OuterVolumeSpecName: "kube-api-access-qxg8s") pod "5fc2132e-3150-4783-a56a-0bd9f33d4c6c" (UID: "5fc2132e-3150-4783-a56a-0bd9f33d4c6c"). InnerVolumeSpecName "kube-api-access-qxg8s".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.484934 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxg8s\" (UniqueName: \"kubernetes.io/projected/5fc2132e-3150-4783-a56a-0bd9f33d4c6c-kube-api-access-qxg8s\") on node \"crc\" DevicePath \"\""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.485302 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fc2132e-3150-4783-a56a-0bd9f33d4c6c-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.502449 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fc2132e-3150-4783-a56a-0bd9f33d4c6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fc2132e-3150-4783-a56a-0bd9f33d4c6c" (UID: "5fc2132e-3150-4783-a56a-0bd9f33d4c6c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.512085 4833 generic.go:334] "Generic (PLEG): container finished" podID="a39a9177-9838-434f-a2e0-c8359ff146fe" containerID="0c2ecd6469f780c39aa056f478860c25ca11ce0b43315c08cc2709be069a2396" exitCode=0
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.512202 4833 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm"
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.512187 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm" event={"ID":"a39a9177-9838-434f-a2e0-c8359ff146fe","Type":"ContainerDied","Data":"0c2ecd6469f780c39aa056f478860c25ca11ce0b43315c08cc2709be069a2396"}
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.512545 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6nkfm" event={"ID":"a39a9177-9838-434f-a2e0-c8359ff146fe","Type":"ContainerDied","Data":"26b13cba661e24bdb850b0646270321882bd5fe2bfba64029cb56f9bd4e69075"}
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.512577 4833 scope.go:117] "RemoveContainer" containerID="0c2ecd6469f780c39aa056f478860c25ca11ce0b43315c08cc2709be069a2396"
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.516700 4833 generic.go:334] "Generic (PLEG): container finished" podID="3cf157f5-3e18-491b-a285-a25a7e71b2ff" containerID="272ff41451e44eb2ec40da1c9b64ebc61e7435315a24555d9e15e946e61f2023" exitCode=0
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.516777 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5q7l2" event={"ID":"3cf157f5-3e18-491b-a285-a25a7e71b2ff","Type":"ContainerDied","Data":"272ff41451e44eb2ec40da1c9b64ebc61e7435315a24555d9e15e946e61f2023"}
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.516809 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5q7l2" event={"ID":"3cf157f5-3e18-491b-a285-a25a7e71b2ff","Type":"ContainerDied","Data":"ae212206a11cce48998e62a98cecc1a46f9b4521c922547883690d93d62ec5e9"}
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.516891 4833 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-5q7l2"
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.521081 4833 generic.go:334] "Generic (PLEG): container finished" podID="7866e11a-9385-4003-9406-d4012097cbb3" containerID="a7ad310c2978ae8527ee6f0a7b07ea9c14d963721633e0b6126eda824d67fad1" exitCode=0
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.521205 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnkbw" event={"ID":"7866e11a-9385-4003-9406-d4012097cbb3","Type":"ContainerDied","Data":"a7ad310c2978ae8527ee6f0a7b07ea9c14d963721633e0b6126eda824d67fad1"}
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.521221 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vnkbw"
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.521238 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnkbw" event={"ID":"7866e11a-9385-4003-9406-d4012097cbb3","Type":"ContainerDied","Data":"c72ce71bf36915bd0a90bb51ccf819702c22194f125545526611a8671c23093e"}
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.523333 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.523375 4833 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="e5638dcaff688dc06379447e219950f594937d7643ce6947869ba3d1c9c8b3ac" exitCode=137
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.523488 4833 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.527962 4833 generic.go:334] "Generic (PLEG): container finished" podID="5fc2132e-3150-4783-a56a-0bd9f33d4c6c" containerID="dba5f603c65d2271744e28a1a39c0c96ec73004382e95ecac27662b2056a3f6b" exitCode=0
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.528025 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgmx8" event={"ID":"5fc2132e-3150-4783-a56a-0bd9f33d4c6c","Type":"ContainerDied","Data":"dba5f603c65d2271744e28a1a39c0c96ec73004382e95ecac27662b2056a3f6b"}
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.528056 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgmx8" event={"ID":"5fc2132e-3150-4783-a56a-0bd9f33d4c6c","Type":"ContainerDied","Data":"03d7882f5bfb4779adb60b00c8e7be3e81db47db83f4cdfab9d44a0ea57171a1"}
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.528144 4833 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-wgmx8"
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.533627 4833 scope.go:117] "RemoveContainer" containerID="0c2ecd6469f780c39aa056f478860c25ca11ce0b43315c08cc2709be069a2396"
Feb 17 13:49:58 crc kubenswrapper[4833]: E0217 13:49:58.534282 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c2ecd6469f780c39aa056f478860c25ca11ce0b43315c08cc2709be069a2396\": container with ID starting with 0c2ecd6469f780c39aa056f478860c25ca11ce0b43315c08cc2709be069a2396 not found: ID does not exist" containerID="0c2ecd6469f780c39aa056f478860c25ca11ce0b43315c08cc2709be069a2396"
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.534332 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2ecd6469f780c39aa056f478860c25ca11ce0b43315c08cc2709be069a2396"} err="failed to get container status \"0c2ecd6469f780c39aa056f478860c25ca11ce0b43315c08cc2709be069a2396\": rpc error: code = NotFound desc = could not find container \"0c2ecd6469f780c39aa056f478860c25ca11ce0b43315c08cc2709be069a2396\": container with ID starting with 0c2ecd6469f780c39aa056f478860c25ca11ce0b43315c08cc2709be069a2396 not found: ID does not exist"
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.534368 4833 scope.go:117] "RemoveContainer" containerID="272ff41451e44eb2ec40da1c9b64ebc61e7435315a24555d9e15e946e61f2023"
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.536246 4833 generic.go:334] "Generic (PLEG): container finished" podID="811bfe90-4b33-4bfb-969f-63d5dbde1b94" containerID="73c03caad43fe8904f9819a2ad9753845e265d6a7fbd4c044d53d8a6bbbcc92e" exitCode=0
Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.536287 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s22hg"
event={"ID":"811bfe90-4b33-4bfb-969f-63d5dbde1b94","Type":"ContainerDied","Data":"73c03caad43fe8904f9819a2ad9753845e265d6a7fbd4c044d53d8a6bbbcc92e"} Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.536313 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s22hg" event={"ID":"811bfe90-4b33-4bfb-969f-63d5dbde1b94","Type":"ContainerDied","Data":"2062c6948ddc458e86648c97708d1779bc1229e79c15f7fc28cbb60d8bea22e4"} Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.536356 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s22hg" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.566417 4833 scope.go:117] "RemoveContainer" containerID="057446373a9dbd10533e3447cf7d1eeb76bc69280821b7951ecc7493d3929e18" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.567048 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6nkfm"] Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.576229 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6nkfm"] Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.582821 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5q7l2"] Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.586133 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fc2132e-3150-4783-a56a-0bd9f33d4c6c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.590207 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.599257 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-5q7l2"] Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.599490 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d8g96"] Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.601295 4833 scope.go:117] "RemoveContainer" containerID="9c13c4d8d407f34642fc01677779109f6f22525cbd5eeb1deecbc4a7c537ba46" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.603246 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vnkbw"] Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.606428 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vnkbw"] Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.610319 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wgmx8"] Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.614035 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wgmx8"] Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.620494 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s22hg"] Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.622414 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s22hg"] Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.627509 4833 scope.go:117] "RemoveContainer" containerID="272ff41451e44eb2ec40da1c9b64ebc61e7435315a24555d9e15e946e61f2023" Feb 17 13:49:58 crc kubenswrapper[4833]: E0217 13:49:58.627901 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"272ff41451e44eb2ec40da1c9b64ebc61e7435315a24555d9e15e946e61f2023\": container with ID starting with 272ff41451e44eb2ec40da1c9b64ebc61e7435315a24555d9e15e946e61f2023 not found: ID 
does not exist" containerID="272ff41451e44eb2ec40da1c9b64ebc61e7435315a24555d9e15e946e61f2023" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.627954 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"272ff41451e44eb2ec40da1c9b64ebc61e7435315a24555d9e15e946e61f2023"} err="failed to get container status \"272ff41451e44eb2ec40da1c9b64ebc61e7435315a24555d9e15e946e61f2023\": rpc error: code = NotFound desc = could not find container \"272ff41451e44eb2ec40da1c9b64ebc61e7435315a24555d9e15e946e61f2023\": container with ID starting with 272ff41451e44eb2ec40da1c9b64ebc61e7435315a24555d9e15e946e61f2023 not found: ID does not exist" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.628018 4833 scope.go:117] "RemoveContainer" containerID="057446373a9dbd10533e3447cf7d1eeb76bc69280821b7951ecc7493d3929e18" Feb 17 13:49:58 crc kubenswrapper[4833]: E0217 13:49:58.628418 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"057446373a9dbd10533e3447cf7d1eeb76bc69280821b7951ecc7493d3929e18\": container with ID starting with 057446373a9dbd10533e3447cf7d1eeb76bc69280821b7951ecc7493d3929e18 not found: ID does not exist" containerID="057446373a9dbd10533e3447cf7d1eeb76bc69280821b7951ecc7493d3929e18" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.628447 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"057446373a9dbd10533e3447cf7d1eeb76bc69280821b7951ecc7493d3929e18"} err="failed to get container status \"057446373a9dbd10533e3447cf7d1eeb76bc69280821b7951ecc7493d3929e18\": rpc error: code = NotFound desc = could not find container \"057446373a9dbd10533e3447cf7d1eeb76bc69280821b7951ecc7493d3929e18\": container with ID starting with 057446373a9dbd10533e3447cf7d1eeb76bc69280821b7951ecc7493d3929e18 not found: ID does not exist" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.628468 4833 
scope.go:117] "RemoveContainer" containerID="9c13c4d8d407f34642fc01677779109f6f22525cbd5eeb1deecbc4a7c537ba46" Feb 17 13:49:58 crc kubenswrapper[4833]: E0217 13:49:58.628700 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c13c4d8d407f34642fc01677779109f6f22525cbd5eeb1deecbc4a7c537ba46\": container with ID starting with 9c13c4d8d407f34642fc01677779109f6f22525cbd5eeb1deecbc4a7c537ba46 not found: ID does not exist" containerID="9c13c4d8d407f34642fc01677779109f6f22525cbd5eeb1deecbc4a7c537ba46" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.628724 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c13c4d8d407f34642fc01677779109f6f22525cbd5eeb1deecbc4a7c537ba46"} err="failed to get container status \"9c13c4d8d407f34642fc01677779109f6f22525cbd5eeb1deecbc4a7c537ba46\": rpc error: code = NotFound desc = could not find container \"9c13c4d8d407f34642fc01677779109f6f22525cbd5eeb1deecbc4a7c537ba46\": container with ID starting with 9c13c4d8d407f34642fc01677779109f6f22525cbd5eeb1deecbc4a7c537ba46 not found: ID does not exist" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.628738 4833 scope.go:117] "RemoveContainer" containerID="a7ad310c2978ae8527ee6f0a7b07ea9c14d963721633e0b6126eda824d67fad1" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.653248 4833 scope.go:117] "RemoveContainer" containerID="20a8001f7eda9c2f3ebdb24479222359d0251498414c3574a5f6d8058129a1f9" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.670178 4833 scope.go:117] "RemoveContainer" containerID="3fb02e365572ee24b585ca11f79f28a7086c694ba390500db43d266c32f4da90" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.684752 4833 scope.go:117] "RemoveContainer" containerID="a7ad310c2978ae8527ee6f0a7b07ea9c14d963721633e0b6126eda824d67fad1" Feb 17 13:49:58 crc kubenswrapper[4833]: E0217 13:49:58.685235 4833 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"a7ad310c2978ae8527ee6f0a7b07ea9c14d963721633e0b6126eda824d67fad1\": container with ID starting with a7ad310c2978ae8527ee6f0a7b07ea9c14d963721633e0b6126eda824d67fad1 not found: ID does not exist" containerID="a7ad310c2978ae8527ee6f0a7b07ea9c14d963721633e0b6126eda824d67fad1" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.685274 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7ad310c2978ae8527ee6f0a7b07ea9c14d963721633e0b6126eda824d67fad1"} err="failed to get container status \"a7ad310c2978ae8527ee6f0a7b07ea9c14d963721633e0b6126eda824d67fad1\": rpc error: code = NotFound desc = could not find container \"a7ad310c2978ae8527ee6f0a7b07ea9c14d963721633e0b6126eda824d67fad1\": container with ID starting with a7ad310c2978ae8527ee6f0a7b07ea9c14d963721633e0b6126eda824d67fad1 not found: ID does not exist" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.685300 4833 scope.go:117] "RemoveContainer" containerID="20a8001f7eda9c2f3ebdb24479222359d0251498414c3574a5f6d8058129a1f9" Feb 17 13:49:58 crc kubenswrapper[4833]: E0217 13:49:58.685616 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20a8001f7eda9c2f3ebdb24479222359d0251498414c3574a5f6d8058129a1f9\": container with ID starting with 20a8001f7eda9c2f3ebdb24479222359d0251498414c3574a5f6d8058129a1f9 not found: ID does not exist" containerID="20a8001f7eda9c2f3ebdb24479222359d0251498414c3574a5f6d8058129a1f9" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.686189 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20a8001f7eda9c2f3ebdb24479222359d0251498414c3574a5f6d8058129a1f9"} err="failed to get container status \"20a8001f7eda9c2f3ebdb24479222359d0251498414c3574a5f6d8058129a1f9\": rpc error: code = NotFound desc = could not find container 
\"20a8001f7eda9c2f3ebdb24479222359d0251498414c3574a5f6d8058129a1f9\": container with ID starting with 20a8001f7eda9c2f3ebdb24479222359d0251498414c3574a5f6d8058129a1f9 not found: ID does not exist" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.690177 4833 scope.go:117] "RemoveContainer" containerID="3fb02e365572ee24b585ca11f79f28a7086c694ba390500db43d266c32f4da90" Feb 17 13:49:58 crc kubenswrapper[4833]: E0217 13:49:58.690573 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fb02e365572ee24b585ca11f79f28a7086c694ba390500db43d266c32f4da90\": container with ID starting with 3fb02e365572ee24b585ca11f79f28a7086c694ba390500db43d266c32f4da90 not found: ID does not exist" containerID="3fb02e365572ee24b585ca11f79f28a7086c694ba390500db43d266c32f4da90" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.690607 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fb02e365572ee24b585ca11f79f28a7086c694ba390500db43d266c32f4da90"} err="failed to get container status \"3fb02e365572ee24b585ca11f79f28a7086c694ba390500db43d266c32f4da90\": rpc error: code = NotFound desc = could not find container \"3fb02e365572ee24b585ca11f79f28a7086c694ba390500db43d266c32f4da90\": container with ID starting with 3fb02e365572ee24b585ca11f79f28a7086c694ba390500db43d266c32f4da90 not found: ID does not exist" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.690635 4833 scope.go:117] "RemoveContainer" containerID="e5638dcaff688dc06379447e219950f594937d7643ce6947869ba3d1c9c8b3ac" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.717393 4833 scope.go:117] "RemoveContainer" containerID="e5638dcaff688dc06379447e219950f594937d7643ce6947869ba3d1c9c8b3ac" Feb 17 13:49:58 crc kubenswrapper[4833]: E0217 13:49:58.717966 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e5638dcaff688dc06379447e219950f594937d7643ce6947869ba3d1c9c8b3ac\": container with ID starting with e5638dcaff688dc06379447e219950f594937d7643ce6947869ba3d1c9c8b3ac not found: ID does not exist" containerID="e5638dcaff688dc06379447e219950f594937d7643ce6947869ba3d1c9c8b3ac" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.717996 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5638dcaff688dc06379447e219950f594937d7643ce6947869ba3d1c9c8b3ac"} err="failed to get container status \"e5638dcaff688dc06379447e219950f594937d7643ce6947869ba3d1c9c8b3ac\": rpc error: code = NotFound desc = could not find container \"e5638dcaff688dc06379447e219950f594937d7643ce6947869ba3d1c9c8b3ac\": container with ID starting with e5638dcaff688dc06379447e219950f594937d7643ce6947869ba3d1c9c8b3ac not found: ID does not exist" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.718018 4833 scope.go:117] "RemoveContainer" containerID="dba5f603c65d2271744e28a1a39c0c96ec73004382e95ecac27662b2056a3f6b" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.772221 4833 scope.go:117] "RemoveContainer" containerID="465919cbe0a7f71befb6f440356354424973b79733b1feb1e7f19474624e7dea" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.786410 4833 scope.go:117] "RemoveContainer" containerID="49d2206085a8dc072ac1485fc650c5998f664712c2ea8148b3930b83ae095f25" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.800866 4833 scope.go:117] "RemoveContainer" containerID="dba5f603c65d2271744e28a1a39c0c96ec73004382e95ecac27662b2056a3f6b" Feb 17 13:49:58 crc kubenswrapper[4833]: E0217 13:49:58.801301 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dba5f603c65d2271744e28a1a39c0c96ec73004382e95ecac27662b2056a3f6b\": container with ID starting with dba5f603c65d2271744e28a1a39c0c96ec73004382e95ecac27662b2056a3f6b not found: ID does not exist" 
containerID="dba5f603c65d2271744e28a1a39c0c96ec73004382e95ecac27662b2056a3f6b" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.801342 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dba5f603c65d2271744e28a1a39c0c96ec73004382e95ecac27662b2056a3f6b"} err="failed to get container status \"dba5f603c65d2271744e28a1a39c0c96ec73004382e95ecac27662b2056a3f6b\": rpc error: code = NotFound desc = could not find container \"dba5f603c65d2271744e28a1a39c0c96ec73004382e95ecac27662b2056a3f6b\": container with ID starting with dba5f603c65d2271744e28a1a39c0c96ec73004382e95ecac27662b2056a3f6b not found: ID does not exist" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.801368 4833 scope.go:117] "RemoveContainer" containerID="465919cbe0a7f71befb6f440356354424973b79733b1feb1e7f19474624e7dea" Feb 17 13:49:58 crc kubenswrapper[4833]: E0217 13:49:58.801800 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"465919cbe0a7f71befb6f440356354424973b79733b1feb1e7f19474624e7dea\": container with ID starting with 465919cbe0a7f71befb6f440356354424973b79733b1feb1e7f19474624e7dea not found: ID does not exist" containerID="465919cbe0a7f71befb6f440356354424973b79733b1feb1e7f19474624e7dea" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.801841 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"465919cbe0a7f71befb6f440356354424973b79733b1feb1e7f19474624e7dea"} err="failed to get container status \"465919cbe0a7f71befb6f440356354424973b79733b1feb1e7f19474624e7dea\": rpc error: code = NotFound desc = could not find container \"465919cbe0a7f71befb6f440356354424973b79733b1feb1e7f19474624e7dea\": container with ID starting with 465919cbe0a7f71befb6f440356354424973b79733b1feb1e7f19474624e7dea not found: ID does not exist" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.801870 4833 scope.go:117] 
"RemoveContainer" containerID="49d2206085a8dc072ac1485fc650c5998f664712c2ea8148b3930b83ae095f25" Feb 17 13:49:58 crc kubenswrapper[4833]: E0217 13:49:58.802232 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49d2206085a8dc072ac1485fc650c5998f664712c2ea8148b3930b83ae095f25\": container with ID starting with 49d2206085a8dc072ac1485fc650c5998f664712c2ea8148b3930b83ae095f25 not found: ID does not exist" containerID="49d2206085a8dc072ac1485fc650c5998f664712c2ea8148b3930b83ae095f25" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.802283 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49d2206085a8dc072ac1485fc650c5998f664712c2ea8148b3930b83ae095f25"} err="failed to get container status \"49d2206085a8dc072ac1485fc650c5998f664712c2ea8148b3930b83ae095f25\": rpc error: code = NotFound desc = could not find container \"49d2206085a8dc072ac1485fc650c5998f664712c2ea8148b3930b83ae095f25\": container with ID starting with 49d2206085a8dc072ac1485fc650c5998f664712c2ea8148b3930b83ae095f25 not found: ID does not exist" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.802317 4833 scope.go:117] "RemoveContainer" containerID="73c03caad43fe8904f9819a2ad9753845e265d6a7fbd4c044d53d8a6bbbcc92e" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.821078 4833 scope.go:117] "RemoveContainer" containerID="7a059a56185d576e6d9e7b4b4620c38d75aade037af6a05826270b2f69ff3e07" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.835593 4833 scope.go:117] "RemoveContainer" containerID="c76e804eeccff1033ad19c1f6589acf2fb88d30d34de5ac9db6d48a0ba226ebe" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.849841 4833 scope.go:117] "RemoveContainer" containerID="73c03caad43fe8904f9819a2ad9753845e265d6a7fbd4c044d53d8a6bbbcc92e" Feb 17 13:49:58 crc kubenswrapper[4833]: E0217 13:49:58.851297 4833 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"73c03caad43fe8904f9819a2ad9753845e265d6a7fbd4c044d53d8a6bbbcc92e\": container with ID starting with 73c03caad43fe8904f9819a2ad9753845e265d6a7fbd4c044d53d8a6bbbcc92e not found: ID does not exist" containerID="73c03caad43fe8904f9819a2ad9753845e265d6a7fbd4c044d53d8a6bbbcc92e" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.851339 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73c03caad43fe8904f9819a2ad9753845e265d6a7fbd4c044d53d8a6bbbcc92e"} err="failed to get container status \"73c03caad43fe8904f9819a2ad9753845e265d6a7fbd4c044d53d8a6bbbcc92e\": rpc error: code = NotFound desc = could not find container \"73c03caad43fe8904f9819a2ad9753845e265d6a7fbd4c044d53d8a6bbbcc92e\": container with ID starting with 73c03caad43fe8904f9819a2ad9753845e265d6a7fbd4c044d53d8a6bbbcc92e not found: ID does not exist" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.851369 4833 scope.go:117] "RemoveContainer" containerID="7a059a56185d576e6d9e7b4b4620c38d75aade037af6a05826270b2f69ff3e07" Feb 17 13:49:58 crc kubenswrapper[4833]: E0217 13:49:58.851895 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a059a56185d576e6d9e7b4b4620c38d75aade037af6a05826270b2f69ff3e07\": container with ID starting with 7a059a56185d576e6d9e7b4b4620c38d75aade037af6a05826270b2f69ff3e07 not found: ID does not exist" containerID="7a059a56185d576e6d9e7b4b4620c38d75aade037af6a05826270b2f69ff3e07" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.851956 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a059a56185d576e6d9e7b4b4620c38d75aade037af6a05826270b2f69ff3e07"} err="failed to get container status \"7a059a56185d576e6d9e7b4b4620c38d75aade037af6a05826270b2f69ff3e07\": rpc error: code = NotFound desc = could not find container 
\"7a059a56185d576e6d9e7b4b4620c38d75aade037af6a05826270b2f69ff3e07\": container with ID starting with 7a059a56185d576e6d9e7b4b4620c38d75aade037af6a05826270b2f69ff3e07 not found: ID does not exist" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.851995 4833 scope.go:117] "RemoveContainer" containerID="c76e804eeccff1033ad19c1f6589acf2fb88d30d34de5ac9db6d48a0ba226ebe" Feb 17 13:49:58 crc kubenswrapper[4833]: E0217 13:49:58.853856 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c76e804eeccff1033ad19c1f6589acf2fb88d30d34de5ac9db6d48a0ba226ebe\": container with ID starting with c76e804eeccff1033ad19c1f6589acf2fb88d30d34de5ac9db6d48a0ba226ebe not found: ID does not exist" containerID="c76e804eeccff1033ad19c1f6589acf2fb88d30d34de5ac9db6d48a0ba226ebe" Feb 17 13:49:58 crc kubenswrapper[4833]: I0217 13:49:58.853907 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c76e804eeccff1033ad19c1f6589acf2fb88d30d34de5ac9db6d48a0ba226ebe"} err="failed to get container status \"c76e804eeccff1033ad19c1f6589acf2fb88d30d34de5ac9db6d48a0ba226ebe\": rpc error: code = NotFound desc = could not find container \"c76e804eeccff1033ad19c1f6589acf2fb88d30d34de5ac9db6d48a0ba226ebe\": container with ID starting with c76e804eeccff1033ad19c1f6589acf2fb88d30d34de5ac9db6d48a0ba226ebe not found: ID does not exist" Feb 17 13:49:59 crc kubenswrapper[4833]: I0217 13:49:59.048012 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cf157f5-3e18-491b-a285-a25a7e71b2ff" path="/var/lib/kubelet/pods/3cf157f5-3e18-491b-a285-a25a7e71b2ff/volumes" Feb 17 13:49:59 crc kubenswrapper[4833]: I0217 13:49:59.048875 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fc2132e-3150-4783-a56a-0bd9f33d4c6c" path="/var/lib/kubelet/pods/5fc2132e-3150-4783-a56a-0bd9f33d4c6c/volumes" Feb 17 13:49:59 crc kubenswrapper[4833]: I0217 13:49:59.049501 
4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7866e11a-9385-4003-9406-d4012097cbb3" path="/var/lib/kubelet/pods/7866e11a-9385-4003-9406-d4012097cbb3/volumes" Feb 17 13:49:59 crc kubenswrapper[4833]: I0217 13:49:59.050473 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="811bfe90-4b33-4bfb-969f-63d5dbde1b94" path="/var/lib/kubelet/pods/811bfe90-4b33-4bfb-969f-63d5dbde1b94/volumes" Feb 17 13:49:59 crc kubenswrapper[4833]: I0217 13:49:59.051107 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a39a9177-9838-434f-a2e0-c8359ff146fe" path="/var/lib/kubelet/pods/a39a9177-9838-434f-a2e0-c8359ff146fe/volumes" Feb 17 13:49:59 crc kubenswrapper[4833]: I0217 13:49:59.051943 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 17 13:49:59 crc kubenswrapper[4833]: I0217 13:49:59.545200 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d8g96" event={"ID":"3a8790e9-2fd7-49cb-a382-d3671a8019a7","Type":"ContainerStarted","Data":"5d6d0ed48d47dcf36d48faae6a94493f022cda58310e66b4fbb29b804b26f113"} Feb 17 13:49:59 crc kubenswrapper[4833]: I0217 13:49:59.545245 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d8g96" event={"ID":"3a8790e9-2fd7-49cb-a382-d3671a8019a7","Type":"ContainerStarted","Data":"804de5cf169cdfdc4adea90f5878a501ffa42c8ae7a7936aefabfac591755a3d"} Feb 17 13:49:59 crc kubenswrapper[4833]: I0217 13:49:59.547509 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-d8g96" Feb 17 13:49:59 crc kubenswrapper[4833]: I0217 13:49:59.551320 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-d8g96" 
Feb 17 13:49:59 crc kubenswrapper[4833]: I0217 13:49:59.567018 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-d8g96" podStartSLOduration=2.566995713 podStartE2EDuration="2.566995713s" podCreationTimestamp="2026-02-17 13:49:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:49:59.561377786 +0000 UTC m=+289.196477249" watchObservedRunningTime="2026-02-17 13:49:59.566995713 +0000 UTC m=+289.202095146" Feb 17 13:50:10 crc kubenswrapper[4833]: I0217 13:50:10.840066 4833 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 17 13:50:14 crc kubenswrapper[4833]: I0217 13:50:14.826394 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 13:50:15 crc kubenswrapper[4833]: I0217 13:50:15.338737 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.021972 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lvqpz"] Feb 17 13:50:20 crc kubenswrapper[4833]: E0217 13:50:20.022388 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fc2132e-3150-4783-a56a-0bd9f33d4c6c" containerName="extract-content" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.022400 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fc2132e-3150-4783-a56a-0bd9f33d4c6c" containerName="extract-content" Feb 17 13:50:20 crc kubenswrapper[4833]: E0217 13:50:20.022410 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="811bfe90-4b33-4bfb-969f-63d5dbde1b94" containerName="extract-content" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.022416 4833 
state_mem.go:107] "Deleted CPUSet assignment" podUID="811bfe90-4b33-4bfb-969f-63d5dbde1b94" containerName="extract-content" Feb 17 13:50:20 crc kubenswrapper[4833]: E0217 13:50:20.022423 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf157f5-3e18-491b-a285-a25a7e71b2ff" containerName="registry-server" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.022429 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf157f5-3e18-491b-a285-a25a7e71b2ff" containerName="registry-server" Feb 17 13:50:20 crc kubenswrapper[4833]: E0217 13:50:20.022437 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a39a9177-9838-434f-a2e0-c8359ff146fe" containerName="marketplace-operator" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.022443 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a39a9177-9838-434f-a2e0-c8359ff146fe" containerName="marketplace-operator" Feb 17 13:50:20 crc kubenswrapper[4833]: E0217 13:50:20.022452 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7866e11a-9385-4003-9406-d4012097cbb3" containerName="extract-content" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.022458 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7866e11a-9385-4003-9406-d4012097cbb3" containerName="extract-content" Feb 17 13:50:20 crc kubenswrapper[4833]: E0217 13:50:20.022464 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="811bfe90-4b33-4bfb-969f-63d5dbde1b94" containerName="registry-server" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.022470 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="811bfe90-4b33-4bfb-969f-63d5dbde1b94" containerName="registry-server" Feb 17 13:50:20 crc kubenswrapper[4833]: E0217 13:50:20.022479 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7866e11a-9385-4003-9406-d4012097cbb3" containerName="registry-server" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.022485 4833 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7866e11a-9385-4003-9406-d4012097cbb3" containerName="registry-server" Feb 17 13:50:20 crc kubenswrapper[4833]: E0217 13:50:20.022497 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7866e11a-9385-4003-9406-d4012097cbb3" containerName="extract-utilities" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.022502 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7866e11a-9385-4003-9406-d4012097cbb3" containerName="extract-utilities" Feb 17 13:50:20 crc kubenswrapper[4833]: E0217 13:50:20.022509 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="811bfe90-4b33-4bfb-969f-63d5dbde1b94" containerName="extract-utilities" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.022514 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="811bfe90-4b33-4bfb-969f-63d5dbde1b94" containerName="extract-utilities" Feb 17 13:50:20 crc kubenswrapper[4833]: E0217 13:50:20.022521 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf157f5-3e18-491b-a285-a25a7e71b2ff" containerName="extract-content" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.022527 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf157f5-3e18-491b-a285-a25a7e71b2ff" containerName="extract-content" Feb 17 13:50:20 crc kubenswrapper[4833]: E0217 13:50:20.022536 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fc2132e-3150-4783-a56a-0bd9f33d4c6c" containerName="extract-utilities" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.022542 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fc2132e-3150-4783-a56a-0bd9f33d4c6c" containerName="extract-utilities" Feb 17 13:50:20 crc kubenswrapper[4833]: E0217 13:50:20.022552 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf157f5-3e18-491b-a285-a25a7e71b2ff" containerName="extract-utilities" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.022558 4833 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf157f5-3e18-491b-a285-a25a7e71b2ff" containerName="extract-utilities" Feb 17 13:50:20 crc kubenswrapper[4833]: E0217 13:50:20.022565 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fc2132e-3150-4783-a56a-0bd9f33d4c6c" containerName="registry-server" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.022570 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fc2132e-3150-4783-a56a-0bd9f33d4c6c" containerName="registry-server" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.022655 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fc2132e-3150-4783-a56a-0bd9f33d4c6c" containerName="registry-server" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.022664 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a39a9177-9838-434f-a2e0-c8359ff146fe" containerName="marketplace-operator" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.022673 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cf157f5-3e18-491b-a285-a25a7e71b2ff" containerName="registry-server" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.022681 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7866e11a-9385-4003-9406-d4012097cbb3" containerName="registry-server" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.022687 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="811bfe90-4b33-4bfb-969f-63d5dbde1b94" containerName="registry-server" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.023337 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lvqpz" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.024809 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.034263 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lvqpz"] Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.159172 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66be447f-7f0e-411d-870d-cf62f09511a3-catalog-content\") pod \"community-operators-lvqpz\" (UID: \"66be447f-7f0e-411d-870d-cf62f09511a3\") " pod="openshift-marketplace/community-operators-lvqpz" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.159242 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbgkg\" (UniqueName: \"kubernetes.io/projected/66be447f-7f0e-411d-870d-cf62f09511a3-kube-api-access-fbgkg\") pod \"community-operators-lvqpz\" (UID: \"66be447f-7f0e-411d-870d-cf62f09511a3\") " pod="openshift-marketplace/community-operators-lvqpz" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.159261 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66be447f-7f0e-411d-870d-cf62f09511a3-utilities\") pod \"community-operators-lvqpz\" (UID: \"66be447f-7f0e-411d-870d-cf62f09511a3\") " pod="openshift-marketplace/community-operators-lvqpz" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.219197 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rh8sb"] Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.220027 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rh8sb" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.221511 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.233694 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rh8sb"] Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.260323 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbgkg\" (UniqueName: \"kubernetes.io/projected/66be447f-7f0e-411d-870d-cf62f09511a3-kube-api-access-fbgkg\") pod \"community-operators-lvqpz\" (UID: \"66be447f-7f0e-411d-870d-cf62f09511a3\") " pod="openshift-marketplace/community-operators-lvqpz" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.260363 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66be447f-7f0e-411d-870d-cf62f09511a3-utilities\") pod \"community-operators-lvqpz\" (UID: \"66be447f-7f0e-411d-870d-cf62f09511a3\") " pod="openshift-marketplace/community-operators-lvqpz" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.260412 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66be447f-7f0e-411d-870d-cf62f09511a3-catalog-content\") pod \"community-operators-lvqpz\" (UID: \"66be447f-7f0e-411d-870d-cf62f09511a3\") " pod="openshift-marketplace/community-operators-lvqpz" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.260871 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66be447f-7f0e-411d-870d-cf62f09511a3-catalog-content\") pod \"community-operators-lvqpz\" (UID: \"66be447f-7f0e-411d-870d-cf62f09511a3\") " 
pod="openshift-marketplace/community-operators-lvqpz" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.260975 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66be447f-7f0e-411d-870d-cf62f09511a3-utilities\") pod \"community-operators-lvqpz\" (UID: \"66be447f-7f0e-411d-870d-cf62f09511a3\") " pod="openshift-marketplace/community-operators-lvqpz" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.278308 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbgkg\" (UniqueName: \"kubernetes.io/projected/66be447f-7f0e-411d-870d-cf62f09511a3-kube-api-access-fbgkg\") pod \"community-operators-lvqpz\" (UID: \"66be447f-7f0e-411d-870d-cf62f09511a3\") " pod="openshift-marketplace/community-operators-lvqpz" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.340108 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lvqpz" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.361850 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74000cfa-d982-443b-ad50-a470fafc3b5a-catalog-content\") pod \"certified-operators-rh8sb\" (UID: \"74000cfa-d982-443b-ad50-a470fafc3b5a\") " pod="openshift-marketplace/certified-operators-rh8sb" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.361908 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74000cfa-d982-443b-ad50-a470fafc3b5a-utilities\") pod \"certified-operators-rh8sb\" (UID: \"74000cfa-d982-443b-ad50-a470fafc3b5a\") " pod="openshift-marketplace/certified-operators-rh8sb" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.361976 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-cp9jx\" (UniqueName: \"kubernetes.io/projected/74000cfa-d982-443b-ad50-a470fafc3b5a-kube-api-access-cp9jx\") pod \"certified-operators-rh8sb\" (UID: \"74000cfa-d982-443b-ad50-a470fafc3b5a\") " pod="openshift-marketplace/certified-operators-rh8sb" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.462715 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp9jx\" (UniqueName: \"kubernetes.io/projected/74000cfa-d982-443b-ad50-a470fafc3b5a-kube-api-access-cp9jx\") pod \"certified-operators-rh8sb\" (UID: \"74000cfa-d982-443b-ad50-a470fafc3b5a\") " pod="openshift-marketplace/certified-operators-rh8sb" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.464002 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74000cfa-d982-443b-ad50-a470fafc3b5a-catalog-content\") pod \"certified-operators-rh8sb\" (UID: \"74000cfa-d982-443b-ad50-a470fafc3b5a\") " pod="openshift-marketplace/certified-operators-rh8sb" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.464032 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74000cfa-d982-443b-ad50-a470fafc3b5a-utilities\") pod \"certified-operators-rh8sb\" (UID: \"74000cfa-d982-443b-ad50-a470fafc3b5a\") " pod="openshift-marketplace/certified-operators-rh8sb" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.465027 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74000cfa-d982-443b-ad50-a470fafc3b5a-utilities\") pod \"certified-operators-rh8sb\" (UID: \"74000cfa-d982-443b-ad50-a470fafc3b5a\") " pod="openshift-marketplace/certified-operators-rh8sb" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.465132 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/74000cfa-d982-443b-ad50-a470fafc3b5a-catalog-content\") pod \"certified-operators-rh8sb\" (UID: \"74000cfa-d982-443b-ad50-a470fafc3b5a\") " pod="openshift-marketplace/certified-operators-rh8sb" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.485769 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp9jx\" (UniqueName: \"kubernetes.io/projected/74000cfa-d982-443b-ad50-a470fafc3b5a-kube-api-access-cp9jx\") pod \"certified-operators-rh8sb\" (UID: \"74000cfa-d982-443b-ad50-a470fafc3b5a\") " pod="openshift-marketplace/certified-operators-rh8sb" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.538217 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rh8sb" Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.733377 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lvqpz"] Feb 17 13:50:20 crc kubenswrapper[4833]: I0217 13:50:20.928293 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rh8sb"] Feb 17 13:50:20 crc kubenswrapper[4833]: W0217 13:50:20.958555 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74000cfa_d982_443b_ad50_a470fafc3b5a.slice/crio-3ab2bed9921ddb2a11ed36a23a0866d7964d40cc1f02fe161df46365301c5250 WatchSource:0}: Error finding container 3ab2bed9921ddb2a11ed36a23a0866d7964d40cc1f02fe161df46365301c5250: Status 404 returned error can't find the container with id 3ab2bed9921ddb2a11ed36a23a0866d7964d40cc1f02fe161df46365301c5250 Feb 17 13:50:21 crc kubenswrapper[4833]: I0217 13:50:21.672296 4833 generic.go:334] "Generic (PLEG): container finished" podID="66be447f-7f0e-411d-870d-cf62f09511a3" containerID="daa228ad21225252878531ecac84f1f3eabf911622f170516c81ee1ab71d2782" exitCode=0 Feb 17 13:50:21 crc 
kubenswrapper[4833]: I0217 13:50:21.672372 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvqpz" event={"ID":"66be447f-7f0e-411d-870d-cf62f09511a3","Type":"ContainerDied","Data":"daa228ad21225252878531ecac84f1f3eabf911622f170516c81ee1ab71d2782"} Feb 17 13:50:21 crc kubenswrapper[4833]: I0217 13:50:21.672700 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvqpz" event={"ID":"66be447f-7f0e-411d-870d-cf62f09511a3","Type":"ContainerStarted","Data":"1c0179241fb8ef36e7429ee8ba12b09033dfb6c1cacf89a71d0ecd1288c2bc70"} Feb 17 13:50:21 crc kubenswrapper[4833]: I0217 13:50:21.675642 4833 generic.go:334] "Generic (PLEG): container finished" podID="74000cfa-d982-443b-ad50-a470fafc3b5a" containerID="470ac5a00bf946522084f970fe5ed33c5857c6d1df1192af5ff6800a073627ac" exitCode=0 Feb 17 13:50:21 crc kubenswrapper[4833]: I0217 13:50:21.675716 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh8sb" event={"ID":"74000cfa-d982-443b-ad50-a470fafc3b5a","Type":"ContainerDied","Data":"470ac5a00bf946522084f970fe5ed33c5857c6d1df1192af5ff6800a073627ac"} Feb 17 13:50:21 crc kubenswrapper[4833]: I0217 13:50:21.675765 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh8sb" event={"ID":"74000cfa-d982-443b-ad50-a470fafc3b5a","Type":"ContainerStarted","Data":"3ab2bed9921ddb2a11ed36a23a0866d7964d40cc1f02fe161df46365301c5250"} Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.620165 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m94q7"] Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.621060 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m94q7" Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.623268 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.633487 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m94q7"] Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.703704 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce-catalog-content\") pod \"redhat-marketplace-m94q7\" (UID: \"75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce\") " pod="openshift-marketplace/redhat-marketplace-m94q7" Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.703976 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ndhd\" (UniqueName: \"kubernetes.io/projected/75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce-kube-api-access-2ndhd\") pod \"redhat-marketplace-m94q7\" (UID: \"75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce\") " pod="openshift-marketplace/redhat-marketplace-m94q7" Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.704029 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce-utilities\") pod \"redhat-marketplace-m94q7\" (UID: \"75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce\") " pod="openshift-marketplace/redhat-marketplace-m94q7" Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.805484 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce-catalog-content\") pod \"redhat-marketplace-m94q7\" (UID: 
\"75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce\") " pod="openshift-marketplace/redhat-marketplace-m94q7" Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.805583 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ndhd\" (UniqueName: \"kubernetes.io/projected/75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce-kube-api-access-2ndhd\") pod \"redhat-marketplace-m94q7\" (UID: \"75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce\") " pod="openshift-marketplace/redhat-marketplace-m94q7" Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.805684 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce-utilities\") pod \"redhat-marketplace-m94q7\" (UID: \"75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce\") " pod="openshift-marketplace/redhat-marketplace-m94q7" Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.806785 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce-catalog-content\") pod \"redhat-marketplace-m94q7\" (UID: \"75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce\") " pod="openshift-marketplace/redhat-marketplace-m94q7" Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.807437 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce-utilities\") pod \"redhat-marketplace-m94q7\" (UID: \"75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce\") " pod="openshift-marketplace/redhat-marketplace-m94q7" Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.828949 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ndhd\" (UniqueName: \"kubernetes.io/projected/75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce-kube-api-access-2ndhd\") pod \"redhat-marketplace-m94q7\" (UID: \"75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce\") " 
pod="openshift-marketplace/redhat-marketplace-m94q7" Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.833302 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xbcwj"] Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.834213 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xbcwj" Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.836392 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.852215 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xbcwj"] Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.907395 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m2bs\" (UniqueName: \"kubernetes.io/projected/2b38215d-3854-4f9b-8310-974daedbd313-kube-api-access-9m2bs\") pod \"redhat-operators-xbcwj\" (UID: \"2b38215d-3854-4f9b-8310-974daedbd313\") " pod="openshift-marketplace/redhat-operators-xbcwj" Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.907530 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b38215d-3854-4f9b-8310-974daedbd313-utilities\") pod \"redhat-operators-xbcwj\" (UID: \"2b38215d-3854-4f9b-8310-974daedbd313\") " pod="openshift-marketplace/redhat-operators-xbcwj" Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.907561 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b38215d-3854-4f9b-8310-974daedbd313-catalog-content\") pod \"redhat-operators-xbcwj\" (UID: \"2b38215d-3854-4f9b-8310-974daedbd313\") " pod="openshift-marketplace/redhat-operators-xbcwj" 
Feb 17 13:50:22 crc kubenswrapper[4833]: I0217 13:50:22.967109 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m94q7" Feb 17 13:50:23 crc kubenswrapper[4833]: I0217 13:50:23.008662 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b38215d-3854-4f9b-8310-974daedbd313-utilities\") pod \"redhat-operators-xbcwj\" (UID: \"2b38215d-3854-4f9b-8310-974daedbd313\") " pod="openshift-marketplace/redhat-operators-xbcwj" Feb 17 13:50:23 crc kubenswrapper[4833]: I0217 13:50:23.008720 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b38215d-3854-4f9b-8310-974daedbd313-catalog-content\") pod \"redhat-operators-xbcwj\" (UID: \"2b38215d-3854-4f9b-8310-974daedbd313\") " pod="openshift-marketplace/redhat-operators-xbcwj" Feb 17 13:50:23 crc kubenswrapper[4833]: I0217 13:50:23.008775 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m2bs\" (UniqueName: \"kubernetes.io/projected/2b38215d-3854-4f9b-8310-974daedbd313-kube-api-access-9m2bs\") pod \"redhat-operators-xbcwj\" (UID: \"2b38215d-3854-4f9b-8310-974daedbd313\") " pod="openshift-marketplace/redhat-operators-xbcwj" Feb 17 13:50:23 crc kubenswrapper[4833]: I0217 13:50:23.009280 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b38215d-3854-4f9b-8310-974daedbd313-utilities\") pod \"redhat-operators-xbcwj\" (UID: \"2b38215d-3854-4f9b-8310-974daedbd313\") " pod="openshift-marketplace/redhat-operators-xbcwj" Feb 17 13:50:23 crc kubenswrapper[4833]: I0217 13:50:23.009368 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b38215d-3854-4f9b-8310-974daedbd313-catalog-content\") pod 
\"redhat-operators-xbcwj\" (UID: \"2b38215d-3854-4f9b-8310-974daedbd313\") " pod="openshift-marketplace/redhat-operators-xbcwj" Feb 17 13:50:23 crc kubenswrapper[4833]: I0217 13:50:23.024465 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m2bs\" (UniqueName: \"kubernetes.io/projected/2b38215d-3854-4f9b-8310-974daedbd313-kube-api-access-9m2bs\") pod \"redhat-operators-xbcwj\" (UID: \"2b38215d-3854-4f9b-8310-974daedbd313\") " pod="openshift-marketplace/redhat-operators-xbcwj" Feb 17 13:50:23 crc kubenswrapper[4833]: I0217 13:50:23.216234 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xbcwj" Feb 17 13:50:23 crc kubenswrapper[4833]: I0217 13:50:23.227223 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m94q7"] Feb 17 13:50:23 crc kubenswrapper[4833]: W0217 13:50:23.249466 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75ec30f8_b6b9_4140_a1ac_f4a421ebe3ce.slice/crio-487b5c20cc62c9d6505e9419f12075401af635737a551de70f64dc2b780c089a WatchSource:0}: Error finding container 487b5c20cc62c9d6505e9419f12075401af635737a551de70f64dc2b780c089a: Status 404 returned error can't find the container with id 487b5c20cc62c9d6505e9419f12075401af635737a551de70f64dc2b780c089a Feb 17 13:50:23 crc kubenswrapper[4833]: I0217 13:50:23.676175 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xbcwj"] Feb 17 13:50:23 crc kubenswrapper[4833]: I0217 13:50:23.692058 4833 generic.go:334] "Generic (PLEG): container finished" podID="74000cfa-d982-443b-ad50-a470fafc3b5a" containerID="1aeb7e681e9df83f0e732493d95f3bcbe27818dbe52542b4a847be0c32e56db8" exitCode=0 Feb 17 13:50:23 crc kubenswrapper[4833]: I0217 13:50:23.692146 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-rh8sb" event={"ID":"74000cfa-d982-443b-ad50-a470fafc3b5a","Type":"ContainerDied","Data":"1aeb7e681e9df83f0e732493d95f3bcbe27818dbe52542b4a847be0c32e56db8"} Feb 17 13:50:23 crc kubenswrapper[4833]: I0217 13:50:23.694610 4833 generic.go:334] "Generic (PLEG): container finished" podID="75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce" containerID="d9e2282bfbcd66696ad33c99e80ed6b72018f95a612372f98515565208f5ddf9" exitCode=0 Feb 17 13:50:23 crc kubenswrapper[4833]: I0217 13:50:23.694685 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m94q7" event={"ID":"75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce","Type":"ContainerDied","Data":"d9e2282bfbcd66696ad33c99e80ed6b72018f95a612372f98515565208f5ddf9"} Feb 17 13:50:23 crc kubenswrapper[4833]: I0217 13:50:23.694709 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m94q7" event={"ID":"75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce","Type":"ContainerStarted","Data":"487b5c20cc62c9d6505e9419f12075401af635737a551de70f64dc2b780c089a"} Feb 17 13:50:23 crc kubenswrapper[4833]: I0217 13:50:23.696747 4833 generic.go:334] "Generic (PLEG): container finished" podID="66be447f-7f0e-411d-870d-cf62f09511a3" containerID="34de565da238474680e20cd016497f3aa4ca1553f8c416f4401c148d52450825" exitCode=0 Feb 17 13:50:23 crc kubenswrapper[4833]: I0217 13:50:23.696772 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvqpz" event={"ID":"66be447f-7f0e-411d-870d-cf62f09511a3","Type":"ContainerDied","Data":"34de565da238474680e20cd016497f3aa4ca1553f8c416f4401c148d52450825"} Feb 17 13:50:24 crc kubenswrapper[4833]: I0217 13:50:24.705492 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvqpz" 
event={"ID":"66be447f-7f0e-411d-870d-cf62f09511a3","Type":"ContainerStarted","Data":"e9b0418df18373fdbc69881cab93b9b31d537284fad3d5daf2fb7704c96f092e"} Feb 17 13:50:24 crc kubenswrapper[4833]: I0217 13:50:24.707913 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh8sb" event={"ID":"74000cfa-d982-443b-ad50-a470fafc3b5a","Type":"ContainerStarted","Data":"75b5bed65dd44e973c19d4eeaae4178c8ca215544260396a93c3b7d4ac0cf8d7"} Feb 17 13:50:24 crc kubenswrapper[4833]: I0217 13:50:24.710668 4833 generic.go:334] "Generic (PLEG): container finished" podID="75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce" containerID="e1f3674a6a4742c662bc81b53b8ff8186ced59b558e143d0c44d3531f90da45a" exitCode=0 Feb 17 13:50:24 crc kubenswrapper[4833]: I0217 13:50:24.710720 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m94q7" event={"ID":"75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce","Type":"ContainerDied","Data":"e1f3674a6a4742c662bc81b53b8ff8186ced59b558e143d0c44d3531f90da45a"} Feb 17 13:50:24 crc kubenswrapper[4833]: I0217 13:50:24.712582 4833 generic.go:334] "Generic (PLEG): container finished" podID="2b38215d-3854-4f9b-8310-974daedbd313" containerID="f4bf49b7c28d3a967cdd92dbe65e87cd327469c83740b152be4f8fa51c2da71c" exitCode=0 Feb 17 13:50:24 crc kubenswrapper[4833]: I0217 13:50:24.712626 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbcwj" event={"ID":"2b38215d-3854-4f9b-8310-974daedbd313","Type":"ContainerDied","Data":"f4bf49b7c28d3a967cdd92dbe65e87cd327469c83740b152be4f8fa51c2da71c"} Feb 17 13:50:24 crc kubenswrapper[4833]: I0217 13:50:24.712684 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbcwj" event={"ID":"2b38215d-3854-4f9b-8310-974daedbd313","Type":"ContainerStarted","Data":"6162e9cf956f0120afa65e17e1fff62c4cbb90cbd418abf8df4acc3fe7b698e5"} Feb 17 13:50:24 crc kubenswrapper[4833]: I0217 
13:50:24.731001 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lvqpz" podStartSLOduration=1.997481405 podStartE2EDuration="4.73098336s" podCreationTimestamp="2026-02-17 13:50:20 +0000 UTC" firstStartedPulling="2026-02-17 13:50:21.675953074 +0000 UTC m=+311.311052547" lastFinishedPulling="2026-02-17 13:50:24.409455069 +0000 UTC m=+314.044554502" observedRunningTime="2026-02-17 13:50:24.727875527 +0000 UTC m=+314.362974970" watchObservedRunningTime="2026-02-17 13:50:24.73098336 +0000 UTC m=+314.366082823" Feb 17 13:50:24 crc kubenswrapper[4833]: I0217 13:50:24.763862 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rh8sb" podStartSLOduration=2.322369186 podStartE2EDuration="4.763835248s" podCreationTimestamp="2026-02-17 13:50:20 +0000 UTC" firstStartedPulling="2026-02-17 13:50:21.679007175 +0000 UTC m=+311.314106648" lastFinishedPulling="2026-02-17 13:50:24.120473277 +0000 UTC m=+313.755572710" observedRunningTime="2026-02-17 13:50:24.758592322 +0000 UTC m=+314.393691805" watchObservedRunningTime="2026-02-17 13:50:24.763835248 +0000 UTC m=+314.398934721" Feb 17 13:50:25 crc kubenswrapper[4833]: I0217 13:50:25.718812 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m94q7" event={"ID":"75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce","Type":"ContainerStarted","Data":"d023f6a48446cde94761b066e065ff51b6e9635d78c4ba434c7949a846c45ed6"} Feb 17 13:50:25 crc kubenswrapper[4833]: I0217 13:50:25.721879 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbcwj" event={"ID":"2b38215d-3854-4f9b-8310-974daedbd313","Type":"ContainerStarted","Data":"96d512ad9d834bf95df596076b2c0893ddd2d94e160e0d21359c8cec6cd3df28"} Feb 17 13:50:25 crc kubenswrapper[4833]: I0217 13:50:25.766938 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-m94q7" podStartSLOduration=2.356773972 podStartE2EDuration="3.766917346s" podCreationTimestamp="2026-02-17 13:50:22 +0000 UTC" firstStartedPulling="2026-02-17 13:50:23.699678042 +0000 UTC m=+313.334777465" lastFinishedPulling="2026-02-17 13:50:25.109821396 +0000 UTC m=+314.744920839" observedRunningTime="2026-02-17 13:50:25.744477468 +0000 UTC m=+315.379576901" watchObservedRunningTime="2026-02-17 13:50:25.766917346 +0000 UTC m=+315.402016799" Feb 17 13:50:26 crc kubenswrapper[4833]: I0217 13:50:26.728259 4833 generic.go:334] "Generic (PLEG): container finished" podID="2b38215d-3854-4f9b-8310-974daedbd313" containerID="96d512ad9d834bf95df596076b2c0893ddd2d94e160e0d21359c8cec6cd3df28" exitCode=0 Feb 17 13:50:26 crc kubenswrapper[4833]: I0217 13:50:26.728337 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbcwj" event={"ID":"2b38215d-3854-4f9b-8310-974daedbd313","Type":"ContainerDied","Data":"96d512ad9d834bf95df596076b2c0893ddd2d94e160e0d21359c8cec6cd3df28"} Feb 17 13:50:27 crc kubenswrapper[4833]: I0217 13:50:27.737426 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbcwj" event={"ID":"2b38215d-3854-4f9b-8310-974daedbd313","Type":"ContainerStarted","Data":"c85bc696b782e02080ab93c63478a2ae5994aa2ce32e120006d17ae53df6beb7"} Feb 17 13:50:27 crc kubenswrapper[4833]: I0217 13:50:27.761930 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xbcwj" podStartSLOduration=3.372915548 podStartE2EDuration="5.761906629s" podCreationTimestamp="2026-02-17 13:50:22 +0000 UTC" firstStartedPulling="2026-02-17 13:50:24.71419357 +0000 UTC m=+314.349293023" lastFinishedPulling="2026-02-17 13:50:27.103184671 +0000 UTC m=+316.738284104" observedRunningTime="2026-02-17 13:50:27.758801536 +0000 UTC m=+317.393900979" watchObservedRunningTime="2026-02-17 13:50:27.761906629 +0000 UTC 
m=+317.397006092" Feb 17 13:50:28 crc kubenswrapper[4833]: I0217 13:50:28.106852 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 13:50:30 crc kubenswrapper[4833]: I0217 13:50:30.340740 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lvqpz" Feb 17 13:50:30 crc kubenswrapper[4833]: I0217 13:50:30.340790 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lvqpz" Feb 17 13:50:30 crc kubenswrapper[4833]: I0217 13:50:30.384827 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lvqpz" Feb 17 13:50:30 crc kubenswrapper[4833]: I0217 13:50:30.538678 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rh8sb" Feb 17 13:50:30 crc kubenswrapper[4833]: I0217 13:50:30.538733 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rh8sb" Feb 17 13:50:30 crc kubenswrapper[4833]: I0217 13:50:30.579839 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rh8sb" Feb 17 13:50:30 crc kubenswrapper[4833]: I0217 13:50:30.787006 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rh8sb" Feb 17 13:50:30 crc kubenswrapper[4833]: I0217 13:50:30.788341 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lvqpz" Feb 17 13:50:32 crc kubenswrapper[4833]: I0217 13:50:32.967516 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m94q7" Feb 17 13:50:32 crc kubenswrapper[4833]: I0217 13:50:32.967884 4833 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m94q7" Feb 17 13:50:33 crc kubenswrapper[4833]: I0217 13:50:33.014839 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m94q7" Feb 17 13:50:33 crc kubenswrapper[4833]: I0217 13:50:33.217076 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xbcwj" Feb 17 13:50:33 crc kubenswrapper[4833]: I0217 13:50:33.217199 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xbcwj" Feb 17 13:50:33 crc kubenswrapper[4833]: I0217 13:50:33.830176 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m94q7" Feb 17 13:50:34 crc kubenswrapper[4833]: I0217 13:50:34.001680 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 17 13:50:34 crc kubenswrapper[4833]: I0217 13:50:34.252312 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xbcwj" podUID="2b38215d-3854-4f9b-8310-974daedbd313" containerName="registry-server" probeResult="failure" output=< Feb 17 13:50:34 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Feb 17 13:50:34 crc kubenswrapper[4833]: > Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.563063 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7zvnb"] Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.564427 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.610010 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.610071 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/95d6ca2a-43da-4b40-87b6-c7f502395cc6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.610108 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/95d6ca2a-43da-4b40-87b6-c7f502395cc6-registry-tls\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.610132 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/95d6ca2a-43da-4b40-87b6-c7f502395cc6-trusted-ca\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.610163 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/95d6ca2a-43da-4b40-87b6-c7f502395cc6-bound-sa-token\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.610182 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjhc9\" (UniqueName: \"kubernetes.io/projected/95d6ca2a-43da-4b40-87b6-c7f502395cc6-kube-api-access-bjhc9\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.610216 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/95d6ca2a-43da-4b40-87b6-c7f502395cc6-registry-certificates\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.610262 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/95d6ca2a-43da-4b40-87b6-c7f502395cc6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.645408 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7zvnb"] Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.684596 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.711976 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/95d6ca2a-43da-4b40-87b6-c7f502395cc6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.712055 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/95d6ca2a-43da-4b40-87b6-c7f502395cc6-registry-tls\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.712081 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/95d6ca2a-43da-4b40-87b6-c7f502395cc6-trusted-ca\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.712101 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/95d6ca2a-43da-4b40-87b6-c7f502395cc6-bound-sa-token\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.712138 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bjhc9\" (UniqueName: \"kubernetes.io/projected/95d6ca2a-43da-4b40-87b6-c7f502395cc6-kube-api-access-bjhc9\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.712160 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/95d6ca2a-43da-4b40-87b6-c7f502395cc6-registry-certificates\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.712187 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/95d6ca2a-43da-4b40-87b6-c7f502395cc6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.713541 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/95d6ca2a-43da-4b40-87b6-c7f502395cc6-registry-certificates\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.713764 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/95d6ca2a-43da-4b40-87b6-c7f502395cc6-trusted-ca\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 
13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.713853 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/95d6ca2a-43da-4b40-87b6-c7f502395cc6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.717741 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/95d6ca2a-43da-4b40-87b6-c7f502395cc6-registry-tls\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.718485 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/95d6ca2a-43da-4b40-87b6-c7f502395cc6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.726230 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/95d6ca2a-43da-4b40-87b6-c7f502395cc6-bound-sa-token\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.727764 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjhc9\" (UniqueName: \"kubernetes.io/projected/95d6ca2a-43da-4b40-87b6-c7f502395cc6-kube-api-access-bjhc9\") pod \"image-registry-66df7c8f76-7zvnb\" (UID: \"95d6ca2a-43da-4b40-87b6-c7f502395cc6\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:38 crc kubenswrapper[4833]: I0217 13:50:38.881694 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:39 crc kubenswrapper[4833]: I0217 13:50:39.302439 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7zvnb"] Feb 17 13:50:39 crc kubenswrapper[4833]: I0217 13:50:39.798177 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" event={"ID":"95d6ca2a-43da-4b40-87b6-c7f502395cc6","Type":"ContainerStarted","Data":"3dccf2c7571216dade1f05965b3a3245e769d240d7bf382946ab1c74aeb99e49"} Feb 17 13:50:39 crc kubenswrapper[4833]: I0217 13:50:39.798452 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" event={"ID":"95d6ca2a-43da-4b40-87b6-c7f502395cc6","Type":"ContainerStarted","Data":"92cc20d41def1432b04602c1a6db93ca2842f263598ccad0f8a8f7ff20c2f74a"} Feb 17 13:50:39 crc kubenswrapper[4833]: I0217 13:50:39.799375 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:39 crc kubenswrapper[4833]: I0217 13:50:39.816405 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" podStartSLOduration=1.816383333 podStartE2EDuration="1.816383333s" podCreationTimestamp="2026-02-17 13:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:50:39.813371479 +0000 UTC m=+329.448470972" watchObservedRunningTime="2026-02-17 13:50:39.816383333 +0000 UTC m=+329.451482796" Feb 17 13:50:43 crc kubenswrapper[4833]: I0217 13:50:43.259360 4833 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xbcwj" Feb 17 13:50:43 crc kubenswrapper[4833]: I0217 13:50:43.301834 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xbcwj" Feb 17 13:50:44 crc kubenswrapper[4833]: I0217 13:50:44.245341 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:50:44 crc kubenswrapper[4833]: I0217 13:50:44.245401 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:50:58 crc kubenswrapper[4833]: I0217 13:50:58.887578 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-7zvnb" Feb 17 13:50:58 crc kubenswrapper[4833]: I0217 13:50:58.979845 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2ddlx"] Feb 17 13:51:14 crc kubenswrapper[4833]: I0217 13:51:14.243829 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:51:14 crc kubenswrapper[4833]: I0217 13:51:14.244628 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.021666 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" podUID="16569a9d-7677-455d-87c6-7b2fb504b731" containerName="registry" containerID="cri-o://c034575e4a954e33f8591823e7e7a086b15549f9611aeef632874d7d7d66743e" gracePeriod=30 Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.393007 4833 generic.go:334] "Generic (PLEG): container finished" podID="16569a9d-7677-455d-87c6-7b2fb504b731" containerID="c034575e4a954e33f8591823e7e7a086b15549f9611aeef632874d7d7d66743e" exitCode=0 Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.393423 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" event={"ID":"16569a9d-7677-455d-87c6-7b2fb504b731","Type":"ContainerDied","Data":"c034575e4a954e33f8591823e7e7a086b15549f9611aeef632874d7d7d66743e"} Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.393483 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" event={"ID":"16569a9d-7677-455d-87c6-7b2fb504b731","Type":"ContainerDied","Data":"0a902cc5b17aa83f614e84c5fce6a5150af843f83545ba847b41e75818d38ffe"} Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.393499 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a902cc5b17aa83f614e84c5fce6a5150af843f83545ba847b41e75818d38ffe" Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.418360 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.573855 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16569a9d-7677-455d-87c6-7b2fb504b731-registry-certificates\") pod \"16569a9d-7677-455d-87c6-7b2fb504b731\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.573932 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16569a9d-7677-455d-87c6-7b2fb504b731-trusted-ca\") pod \"16569a9d-7677-455d-87c6-7b2fb504b731\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.574137 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16569a9d-7677-455d-87c6-7b2fb504b731-ca-trust-extracted\") pod \"16569a9d-7677-455d-87c6-7b2fb504b731\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.574179 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5xcv\" (UniqueName: \"kubernetes.io/projected/16569a9d-7677-455d-87c6-7b2fb504b731-kube-api-access-v5xcv\") pod \"16569a9d-7677-455d-87c6-7b2fb504b731\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.574217 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16569a9d-7677-455d-87c6-7b2fb504b731-registry-tls\") pod \"16569a9d-7677-455d-87c6-7b2fb504b731\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.575827 4833 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16569a9d-7677-455d-87c6-7b2fb504b731-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "16569a9d-7677-455d-87c6-7b2fb504b731" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.576301 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16569a9d-7677-455d-87c6-7b2fb504b731-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "16569a9d-7677-455d-87c6-7b2fb504b731" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.576384 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"16569a9d-7677-455d-87c6-7b2fb504b731\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.577055 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16569a9d-7677-455d-87c6-7b2fb504b731-bound-sa-token\") pod \"16569a9d-7677-455d-87c6-7b2fb504b731\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.577131 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16569a9d-7677-455d-87c6-7b2fb504b731-installation-pull-secrets\") pod \"16569a9d-7677-455d-87c6-7b2fb504b731\" (UID: \"16569a9d-7677-455d-87c6-7b2fb504b731\") " Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.580217 4833 reconciler_common.go:293] "Volume 
detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16569a9d-7677-455d-87c6-7b2fb504b731-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.580254 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16569a9d-7677-455d-87c6-7b2fb504b731-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.582668 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16569a9d-7677-455d-87c6-7b2fb504b731-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "16569a9d-7677-455d-87c6-7b2fb504b731" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.592476 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16569a9d-7677-455d-87c6-7b2fb504b731-kube-api-access-v5xcv" (OuterVolumeSpecName: "kube-api-access-v5xcv") pod "16569a9d-7677-455d-87c6-7b2fb504b731" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731"). InnerVolumeSpecName "kube-api-access-v5xcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.596759 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16569a9d-7677-455d-87c6-7b2fb504b731-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "16569a9d-7677-455d-87c6-7b2fb504b731" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.599696 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16569a9d-7677-455d-87c6-7b2fb504b731-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "16569a9d-7677-455d-87c6-7b2fb504b731" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.603989 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "16569a9d-7677-455d-87c6-7b2fb504b731" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.615114 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16569a9d-7677-455d-87c6-7b2fb504b731-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "16569a9d-7677-455d-87c6-7b2fb504b731" (UID: "16569a9d-7677-455d-87c6-7b2fb504b731"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.681734 4833 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16569a9d-7677-455d-87c6-7b2fb504b731-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.681844 4833 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16569a9d-7677-455d-87c6-7b2fb504b731-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.681878 4833 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16569a9d-7677-455d-87c6-7b2fb504b731-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.681889 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5xcv\" (UniqueName: \"kubernetes.io/projected/16569a9d-7677-455d-87c6-7b2fb504b731-kube-api-access-v5xcv\") on node \"crc\" DevicePath \"\"" Feb 17 13:51:24 crc kubenswrapper[4833]: I0217 13:51:24.681900 4833 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16569a9d-7677-455d-87c6-7b2fb504b731-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:51:25 crc kubenswrapper[4833]: I0217 13:51:25.398152 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2ddlx" Feb 17 13:51:25 crc kubenswrapper[4833]: I0217 13:51:25.432364 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2ddlx"] Feb 17 13:51:25 crc kubenswrapper[4833]: I0217 13:51:25.449032 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2ddlx"] Feb 17 13:51:27 crc kubenswrapper[4833]: I0217 13:51:27.053101 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16569a9d-7677-455d-87c6-7b2fb504b731" path="/var/lib/kubelet/pods/16569a9d-7677-455d-87c6-7b2fb504b731/volumes" Feb 17 13:51:44 crc kubenswrapper[4833]: I0217 13:51:44.244280 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:51:44 crc kubenswrapper[4833]: I0217 13:51:44.244964 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:51:44 crc kubenswrapper[4833]: I0217 13:51:44.245018 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" Feb 17 13:51:44 crc kubenswrapper[4833]: I0217 13:51:44.245784 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"62b5aa8ef2aa9dc1ba827be3b12bbc4a526f52768d531f209957f02ab5874e23"} pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 13:51:44 crc kubenswrapper[4833]: I0217 13:51:44.245855 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" containerID="cri-o://62b5aa8ef2aa9dc1ba827be3b12bbc4a526f52768d531f209957f02ab5874e23" gracePeriod=600 Feb 17 13:51:44 crc kubenswrapper[4833]: I0217 13:51:44.514823 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerID="62b5aa8ef2aa9dc1ba827be3b12bbc4a526f52768d531f209957f02ab5874e23" exitCode=0 Feb 17 13:51:44 crc kubenswrapper[4833]: I0217 13:51:44.514846 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" event={"ID":"f4a1ca83-1919-4f9c-82de-c849cbd50e70","Type":"ContainerDied","Data":"62b5aa8ef2aa9dc1ba827be3b12bbc4a526f52768d531f209957f02ab5874e23"} Feb 17 13:51:44 crc kubenswrapper[4833]: I0217 13:51:44.515234 4833 scope.go:117] "RemoveContainer" containerID="89a1061eccd396579d2995d2d4ee3bd83f65f7e06a951b4d77897e687a7dcb78" Feb 17 13:51:45 crc kubenswrapper[4833]: I0217 13:51:45.534887 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" event={"ID":"f4a1ca83-1919-4f9c-82de-c849cbd50e70","Type":"ContainerStarted","Data":"3ac9e7c6a3aaeefcf86cef8591aaf0874f4e451b15c7e636ff29c4f41608981f"} Feb 17 13:53:44 crc kubenswrapper[4833]: I0217 13:53:44.243754 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:53:44 crc kubenswrapper[4833]: I0217 
13:53:44.244501 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:54:11 crc kubenswrapper[4833]: I0217 13:54:11.367795 4833 scope.go:117] "RemoveContainer" containerID="c034575e4a954e33f8591823e7e7a086b15549f9611aeef632874d7d7d66743e" Feb 17 13:54:14 crc kubenswrapper[4833]: I0217 13:54:14.243867 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:54:14 crc kubenswrapper[4833]: I0217 13:54:14.244703 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:54:44 crc kubenswrapper[4833]: I0217 13:54:44.243913 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:54:44 crc kubenswrapper[4833]: I0217 13:54:44.244989 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Feb 17 13:54:44 crc kubenswrapper[4833]: I0217 13:54:44.245117 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" Feb 17 13:54:44 crc kubenswrapper[4833]: I0217 13:54:44.246188 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ac9e7c6a3aaeefcf86cef8591aaf0874f4e451b15c7e636ff29c4f41608981f"} pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 13:54:44 crc kubenswrapper[4833]: I0217 13:54:44.246316 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" containerID="cri-o://3ac9e7c6a3aaeefcf86cef8591aaf0874f4e451b15c7e636ff29c4f41608981f" gracePeriod=600 Feb 17 13:54:44 crc kubenswrapper[4833]: I0217 13:54:44.803539 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerID="3ac9e7c6a3aaeefcf86cef8591aaf0874f4e451b15c7e636ff29c4f41608981f" exitCode=0 Feb 17 13:54:44 crc kubenswrapper[4833]: I0217 13:54:44.804394 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" event={"ID":"f4a1ca83-1919-4f9c-82de-c849cbd50e70","Type":"ContainerDied","Data":"3ac9e7c6a3aaeefcf86cef8591aaf0874f4e451b15c7e636ff29c4f41608981f"} Feb 17 13:54:44 crc kubenswrapper[4833]: I0217 13:54:44.804429 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" event={"ID":"f4a1ca83-1919-4f9c-82de-c849cbd50e70","Type":"ContainerStarted","Data":"2987cca443b50c5381fbff8d4447cf8801984c777d16362c800622301c56c146"} Feb 17 13:54:44 
crc kubenswrapper[4833]: I0217 13:54:44.804451 4833 scope.go:117] "RemoveContainer" containerID="62b5aa8ef2aa9dc1ba827be3b12bbc4a526f52768d531f209957f02ab5874e23"
Feb 17 13:55:57 crc kubenswrapper[4833]: I0217 13:55:57.864370 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c"]
Feb 17 13:55:57 crc kubenswrapper[4833]: E0217 13:55:57.865000 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16569a9d-7677-455d-87c6-7b2fb504b731" containerName="registry"
Feb 17 13:55:57 crc kubenswrapper[4833]: I0217 13:55:57.865012 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="16569a9d-7677-455d-87c6-7b2fb504b731" containerName="registry"
Feb 17 13:55:57 crc kubenswrapper[4833]: I0217 13:55:57.865137 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="16569a9d-7677-455d-87c6-7b2fb504b731" containerName="registry"
Feb 17 13:55:57 crc kubenswrapper[4833]: I0217 13:55:57.865809 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c"
Feb 17 13:55:57 crc kubenswrapper[4833]: I0217 13:55:57.868640 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 17 13:55:57 crc kubenswrapper[4833]: I0217 13:55:57.876231 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c"]
Feb 17 13:55:57 crc kubenswrapper[4833]: I0217 13:55:57.939109 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c\" (UID: \"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c"
Feb 17 13:55:57 crc kubenswrapper[4833]: I0217 13:55:57.939183 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c\" (UID: \"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c"
Feb 17 13:55:57 crc kubenswrapper[4833]: I0217 13:55:57.939215 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2r54\" (UniqueName: \"kubernetes.io/projected/9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520-kube-api-access-b2r54\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c\" (UID: \"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c"
Feb 17 13:55:58 crc kubenswrapper[4833]: I0217 13:55:58.040829 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c\" (UID: \"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c"
Feb 17 13:55:58 crc kubenswrapper[4833]: I0217 13:55:58.040927 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c\" (UID: \"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c"
Feb 17 13:55:58 crc kubenswrapper[4833]: I0217 13:55:58.040976 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2r54\" (UniqueName: \"kubernetes.io/projected/9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520-kube-api-access-b2r54\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c\" (UID: \"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c"
Feb 17 13:55:58 crc kubenswrapper[4833]: I0217 13:55:58.041848 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c\" (UID: \"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c"
Feb 17 13:55:58 crc kubenswrapper[4833]: I0217 13:55:58.041858 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c\" (UID: \"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c"
Feb 17 13:55:58 crc kubenswrapper[4833]: I0217 13:55:58.065229 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2r54\" (UniqueName: \"kubernetes.io/projected/9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520-kube-api-access-b2r54\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c\" (UID: \"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c"
Feb 17 13:55:58 crc kubenswrapper[4833]: I0217 13:55:58.181746 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c"
Feb 17 13:55:58 crc kubenswrapper[4833]: I0217 13:55:58.415654 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c"]
Feb 17 13:55:59 crc kubenswrapper[4833]: I0217 13:55:59.292447 4833 generic.go:334] "Generic (PLEG): container finished" podID="9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520" containerID="7ba7e8f7811582107046d605f7313cf23b89642bdf6a6e8ff3683d0b52b780c1" exitCode=0
Feb 17 13:55:59 crc kubenswrapper[4833]: I0217 13:55:59.292557 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c" event={"ID":"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520","Type":"ContainerDied","Data":"7ba7e8f7811582107046d605f7313cf23b89642bdf6a6e8ff3683d0b52b780c1"}
Feb 17 13:55:59 crc kubenswrapper[4833]: I0217 13:55:59.292804 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c" event={"ID":"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520","Type":"ContainerStarted","Data":"b28da9b004553fa339bca8596049c065c5a6123d87d75fdbeaba298e73377fc8"}
Feb 17 13:55:59 crc kubenswrapper[4833]: I0217 13:55:59.293926 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 13:56:01 crc kubenswrapper[4833]: I0217 13:56:01.309891 4833 generic.go:334] "Generic (PLEG): container finished" podID="9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520" containerID="45df41a77179f0320ece23926b50e9c442a1829af9350b4a3fb18cdbcba99414" exitCode=0
Feb 17 13:56:01 crc kubenswrapper[4833]: I0217 13:56:01.309988 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c" event={"ID":"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520","Type":"ContainerDied","Data":"45df41a77179f0320ece23926b50e9c442a1829af9350b4a3fb18cdbcba99414"}
Feb 17 13:56:02 crc kubenswrapper[4833]: I0217 13:56:02.317096 4833 generic.go:334] "Generic (PLEG): container finished" podID="9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520" containerID="52a2facf3a391f0447978032a7147f20dc5c25adfa64730477cf77d66a6c9836" exitCode=0
Feb 17 13:56:02 crc kubenswrapper[4833]: I0217 13:56:02.317159 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c" event={"ID":"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520","Type":"ContainerDied","Data":"52a2facf3a391f0447978032a7147f20dc5c25adfa64730477cf77d66a6c9836"}
Feb 17 13:56:03 crc kubenswrapper[4833]: I0217 13:56:03.536409 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c"
Feb 17 13:56:03 crc kubenswrapper[4833]: I0217 13:56:03.611198 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520-util\") pod \"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520\" (UID: \"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520\") "
Feb 17 13:56:03 crc kubenswrapper[4833]: I0217 13:56:03.611274 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2r54\" (UniqueName: \"kubernetes.io/projected/9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520-kube-api-access-b2r54\") pod \"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520\" (UID: \"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520\") "
Feb 17 13:56:03 crc kubenswrapper[4833]: I0217 13:56:03.611353 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520-bundle\") pod \"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520\" (UID: \"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520\") "
Feb 17 13:56:03 crc kubenswrapper[4833]: I0217 13:56:03.613328 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520-bundle" (OuterVolumeSpecName: "bundle") pod "9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520" (UID: "9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:56:03 crc kubenswrapper[4833]: I0217 13:56:03.616686 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520-kube-api-access-b2r54" (OuterVolumeSpecName: "kube-api-access-b2r54") pod "9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520" (UID: "9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520"). InnerVolumeSpecName "kube-api-access-b2r54". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:56:03 crc kubenswrapper[4833]: I0217 13:56:03.625729 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520-util" (OuterVolumeSpecName: "util") pod "9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520" (UID: "9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:56:03 crc kubenswrapper[4833]: I0217 13:56:03.712843 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2r54\" (UniqueName: \"kubernetes.io/projected/9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520-kube-api-access-b2r54\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:03 crc kubenswrapper[4833]: I0217 13:56:03.712886 4833 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:03 crc kubenswrapper[4833]: I0217 13:56:03.712896 4833 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520-util\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:04 crc kubenswrapper[4833]: I0217 13:56:04.328825 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c" event={"ID":"9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520","Type":"ContainerDied","Data":"b28da9b004553fa339bca8596049c065c5a6123d87d75fdbeaba298e73377fc8"}
Feb 17 13:56:04 crc kubenswrapper[4833]: I0217 13:56:04.329232 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b28da9b004553fa339bca8596049c065c5a6123d87d75fdbeaba298e73377fc8"
Feb 17 13:56:04 crc kubenswrapper[4833]: I0217 13:56:04.328911 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c"
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.202120 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7r9gt"]
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.202786 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovn-controller" containerID="cri-o://42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2" gracePeriod=30
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.203169 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovn-acl-logging" containerID="cri-o://ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7" gracePeriod=30
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.203182 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="northd" containerID="cri-o://4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79" gracePeriod=30
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.203209 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="kube-rbac-proxy-node" containerID="cri-o://e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec" gracePeriod=30
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.203297 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc" gracePeriod=30
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.203413 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="sbdb" containerID="cri-o://8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2" gracePeriod=30
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.203449 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="nbdb" containerID="cri-o://f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080" gracePeriod=30
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.270706 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovnkube-controller" containerID="cri-o://a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9" gracePeriod=30
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.360955 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovnkube-controller/3.log"
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.362909 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovn-acl-logging/0.log"
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.365642 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovn-controller/0.log"
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.366080 4833 generic.go:334] "Generic (PLEG): container finished" podID="72c5918a-056f-446c-b138-a1be7140a5b0" containerID="a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc" exitCode=0
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.366119 4833 generic.go:334] "Generic (PLEG): container finished" podID="72c5918a-056f-446c-b138-a1be7140a5b0" containerID="e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec" exitCode=0
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.366128 4833 generic.go:334] "Generic (PLEG): container finished" podID="72c5918a-056f-446c-b138-a1be7140a5b0" containerID="ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7" exitCode=143
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.366134 4833 generic.go:334] "Generic (PLEG): container finished" podID="72c5918a-056f-446c-b138-a1be7140a5b0" containerID="42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2" exitCode=143
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.366155 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerDied","Data":"a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc"}
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.366194 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerDied","Data":"e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec"}
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.366219 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerDied","Data":"ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7"}
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.366232 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerDied","Data":"42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2"}
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.367768 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wlt4c_a3b8d3ca-f768-4129-9c1a-b4866dd852d4/kube-multus/2.log"
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.368173 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wlt4c_a3b8d3ca-f768-4129-9c1a-b4866dd852d4/kube-multus/1.log"
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.368200 4833 generic.go:334] "Generic (PLEG): container finished" podID="a3b8d3ca-f768-4129-9c1a-b4866dd852d4" containerID="11b83835c273f377e2c85db9ff37901aa2d246ce6673d32ff9925526757a98b3" exitCode=2
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.368223 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wlt4c" event={"ID":"a3b8d3ca-f768-4129-9c1a-b4866dd852d4","Type":"ContainerDied","Data":"11b83835c273f377e2c85db9ff37901aa2d246ce6673d32ff9925526757a98b3"}
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.368246 4833 scope.go:117] "RemoveContainer" containerID="efd76798e54bfcbad6d3a5f07396fe8579adcdb3d5bab3c303a9d31ad242e830"
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.368777 4833 scope.go:117] "RemoveContainer" containerID="11b83835c273f377e2c85db9ff37901aa2d246ce6673d32ff9925526757a98b3"
Feb 17 13:56:09 crc kubenswrapper[4833]: E0217 13:56:09.368993 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-wlt4c_openshift-multus(a3b8d3ca-f768-4129-9c1a-b4866dd852d4)\"" pod="openshift-multus/multus-wlt4c" podUID="a3b8d3ca-f768-4129-9c1a-b4866dd852d4"
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.894488 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovnkube-controller/3.log"
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.897728 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovn-acl-logging/0.log"
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.898446 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovn-controller/0.log"
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.899155 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt"
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992026 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-run-systemd\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992087 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-run-netns\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992108 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-systemd-units\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992137 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/72c5918a-056f-446c-b138-a1be7140a5b0-ovnkube-script-lib\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992184 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxg4p\" (UniqueName: \"kubernetes.io/projected/72c5918a-056f-446c-b138-a1be7140a5b0-kube-api-access-wxg4p\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992201 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-kubelet\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992217 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-var-lib-openvswitch\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992232 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-run-ovn\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992231 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992262 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992257 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992295 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992290 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-run-openvswitch\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992337 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992366 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-run-ovn-kubernetes\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992386 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-cni-netd\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992406 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-etc-openvswitch\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992424 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/72c5918a-056f-446c-b138-a1be7140a5b0-ovnkube-config\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992449 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-cni-bin\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992368 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992464 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-node-log\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992479 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-log-socket\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992513 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-slash\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992528 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/72c5918a-056f-446c-b138-a1be7140a5b0-ovn-node-metrics-cert\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992553 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/72c5918a-056f-446c-b138-a1be7140a5b0-env-overrides\") pod \"72c5918a-056f-446c-b138-a1be7140a5b0\" (UID: \"72c5918a-056f-446c-b138-a1be7140a5b0\") "
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992384 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992398 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992420 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992438 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992451 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992668 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72c5918a-056f-446c-b138-a1be7140a5b0-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992722 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992721 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-slash" (OuterVolumeSpecName: "host-slash") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992742 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-node-log" (OuterVolumeSpecName: "node-log") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992750 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-log-socket" (OuterVolumeSpecName: "log-socket") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992771 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72c5918a-056f-446c-b138-a1be7140a5b0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992885 4833 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992897 4833 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-systemd-units\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992905 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/72c5918a-056f-446c-b138-a1be7140a5b0-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992913 4833 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992922 4833 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992929 4833 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992938 4833 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992947 4833 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992958 4833 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992967 4833 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992976 4833 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992984 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/72c5918a-056f-446c-b138-a1be7140a5b0-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992992 4833 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-node-log\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.992999 4833 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-cni-bin\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.993006 4833 reconciler_common.go:293] "Volume detached for volume
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-log-socket\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.993008 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72c5918a-056f-446c-b138-a1be7140a5b0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.993014 4833 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-host-slash\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:09 crc kubenswrapper[4833]: I0217 13:56:09.998912 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c5918a-056f-446c-b138-a1be7140a5b0-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.000848 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c5918a-056f-446c-b138-a1be7140a5b0-kube-api-access-wxg4p" (OuterVolumeSpecName: "kube-api-access-wxg4p") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "kube-api-access-wxg4p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.010993 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lm5vc"] Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.011504 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="sbdb" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.011559 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="sbdb" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.011576 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520" containerName="util" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.011583 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520" containerName="util" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.011592 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovnkube-controller" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.011603 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovnkube-controller" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.011609 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="kubecfg-setup" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.011615 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="kubecfg-setup" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.011630 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovnkube-controller" Feb 17 13:56:10 crc 
kubenswrapper[4833]: I0217 13:56:10.011637 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovnkube-controller" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.011649 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="nbdb" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.011656 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="nbdb" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.011668 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovnkube-controller" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.011675 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovnkube-controller" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.011688 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovnkube-controller" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.011693 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovnkube-controller" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.011704 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovn-acl-logging" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.011710 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovn-acl-logging" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.011720 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520" containerName="extract" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 
13:56:10.011725 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520" containerName="extract" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.011770 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520" containerName="pull" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.011776 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520" containerName="pull" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.011784 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.011790 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.011800 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="kube-rbac-proxy-node" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.011806 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="kube-rbac-proxy-node" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.011818 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovn-controller" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.011824 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovn-controller" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.011835 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="northd" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.011841 4833 
state_mem.go:107] "Deleted CPUSet assignment" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="northd" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.012591 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.012608 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="kube-rbac-proxy-node" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.012624 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="northd" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.012641 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovnkube-controller" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.012648 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovnkube-controller" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.012654 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520" containerName="extract" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.012666 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovn-controller" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.012674 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="nbdb" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.012682 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="sbdb" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.012695 4833 
memory_manager.go:354] "RemoveStaleState removing state" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovn-acl-logging" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.012707 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovnkube-controller" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.013169 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovnkube-controller" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.013186 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovnkube-controller" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.013734 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovnkube-controller" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.013756 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" containerName="ovnkube-controller" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.022484 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "72c5918a-056f-446c-b138-a1be7140a5b0" (UID: "72c5918a-056f-446c-b138-a1be7140a5b0"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.022960 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.093792 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/72c5918a-056f-446c-b138-a1be7140a5b0-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.093818 4833 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/72c5918a-056f-446c-b138-a1be7140a5b0-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.093829 4833 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/72c5918a-056f-446c-b138-a1be7140a5b0-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.093838 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxg4p\" (UniqueName: \"kubernetes.io/projected/72c5918a-056f-446c-b138-a1be7140a5b0-kube-api-access-wxg4p\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.195290 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-cni-netd\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.195353 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-log-socket\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc 
kubenswrapper[4833]: I0217 13:56:10.195376 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-etc-openvswitch\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.195400 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-ovnkube-config\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.195454 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-slash\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.195483 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-kubelet\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.195550 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-var-lib-openvswitch\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 
17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.195606 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.195641 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-ovn-node-metrics-cert\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.195663 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-run-openvswitch\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.195678 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-env-overrides\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.195700 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-run-ovn-kubernetes\") pod \"ovnkube-node-lm5vc\" (UID: 
\"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.195724 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-cni-bin\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.195744 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-run-ovn\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.195762 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-run-systemd\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.195780 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-run-netns\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.195840 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-node-log\") pod \"ovnkube-node-lm5vc\" (UID: 
\"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.195872 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-ovnkube-script-lib\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.195910 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-systemd-units\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.195926 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9thf8\" (UniqueName: \"kubernetes.io/projected/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-kube-api-access-9thf8\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297004 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-systemd-units\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297064 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9thf8\" (UniqueName: \"kubernetes.io/projected/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-kube-api-access-9thf8\") pod 
\"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297106 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-cni-netd\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297126 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-etc-openvswitch\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297128 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-systemd-units\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297190 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-etc-openvswitch\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297226 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-cni-netd\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297192 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-log-socket\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297142 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-log-socket\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297342 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-ovnkube-config\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297364 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-slash\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297390 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-kubelet\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297408 
4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-var-lib-openvswitch\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297431 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297471 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-kubelet\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297474 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-ovn-node-metrics-cert\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297505 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-run-openvswitch\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297521 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-env-overrides\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297538 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-run-ovn-kubernetes\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297556 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-run-ovn\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297569 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-cni-bin\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297588 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-run-systemd\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297603 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-run-netns\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297624 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-node-log\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.297644 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-ovnkube-script-lib\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.298017 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-var-lib-openvswitch\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.298078 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-ovnkube-config\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.298084 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-run-ovn\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.298113 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-slash\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.298139 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.298157 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-run-openvswitch\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.298180 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-run-systemd\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.298215 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-cni-bin\") pod 
\"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.298234 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-ovnkube-script-lib\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.298244 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-run-ovn-kubernetes\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.298275 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-node-log\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.298276 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-host-run-netns\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.298570 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-env-overrides\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.302615 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-ovn-node-metrics-cert\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.320635 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9thf8\" (UniqueName: \"kubernetes.io/projected/8a7e5f75-0918-4179-ba63-0daa9ceaf7e5-kube-api-access-9thf8\") pod \"ovnkube-node-lm5vc\" (UID: \"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.348517 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.376127 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wlt4c_a3b8d3ca-f768-4129-9c1a-b4866dd852d4/kube-multus/2.log" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.381825 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovnkube-controller/3.log" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.383616 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovn-acl-logging/0.log" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.384692 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7r9gt_72c5918a-056f-446c-b138-a1be7140a5b0/ovn-controller/0.log" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.385524 4833 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.385516 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerDied","Data":"a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9"} Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.385602 4833 scope.go:117] "RemoveContainer" containerID="a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.396082 4833 generic.go:334] "Generic (PLEG): container finished" podID="72c5918a-056f-446c-b138-a1be7140a5b0" containerID="a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9" exitCode=0 Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.396143 4833 generic.go:334] "Generic (PLEG): container finished" podID="72c5918a-056f-446c-b138-a1be7140a5b0" containerID="8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2" exitCode=0 Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.396156 4833 generic.go:334] "Generic (PLEG): container finished" podID="72c5918a-056f-446c-b138-a1be7140a5b0" containerID="f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080" exitCode=0 Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.396165 4833 generic.go:334] "Generic (PLEG): container finished" podID="72c5918a-056f-446c-b138-a1be7140a5b0" containerID="4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79" exitCode=0 Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.396191 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerDied","Data":"8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2"} Feb 17 13:56:10 crc 
kubenswrapper[4833]: I0217 13:56:10.396226 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerDied","Data":"f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080"} Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.396241 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerDied","Data":"4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79"} Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.396254 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7r9gt" event={"ID":"72c5918a-056f-446c-b138-a1be7140a5b0","Type":"ContainerDied","Data":"a4b7843bd85b06f5b3e21637cd2e22f0854ff282374189fb9dc7a310009fcdf6"} Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.446977 4833 scope.go:117] "RemoveContainer" containerID="b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.473896 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7r9gt"] Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.485459 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7r9gt"] Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.509205 4833 scope.go:117] "RemoveContainer" containerID="8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.562395 4833 scope.go:117] "RemoveContainer" containerID="f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.614236 4833 scope.go:117] "RemoveContainer" containerID="4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79" Feb 17 
13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.630703 4833 scope.go:117] "RemoveContainer" containerID="a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.646760 4833 scope.go:117] "RemoveContainer" containerID="e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.658664 4833 scope.go:117] "RemoveContainer" containerID="ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.669871 4833 scope.go:117] "RemoveContainer" containerID="42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.687030 4833 scope.go:117] "RemoveContainer" containerID="924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.698712 4833 scope.go:117] "RemoveContainer" containerID="a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.699123 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9\": container with ID starting with a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9 not found: ID does not exist" containerID="a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.699149 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9"} err="failed to get container status \"a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9\": rpc error: code = NotFound desc = could not find container \"a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9\": 
container with ID starting with a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9 not found: ID does not exist" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.699168 4833 scope.go:117] "RemoveContainer" containerID="b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.699483 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\": container with ID starting with b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b not found: ID does not exist" containerID="b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.699525 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b"} err="failed to get container status \"b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\": rpc error: code = NotFound desc = could not find container \"b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\": container with ID starting with b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b not found: ID does not exist" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.699543 4833 scope.go:117] "RemoveContainer" containerID="8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.699774 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\": container with ID starting with 8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2 not found: ID does not exist" 
containerID="8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.699792 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2"} err="failed to get container status \"8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\": rpc error: code = NotFound desc = could not find container \"8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\": container with ID starting with 8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2 not found: ID does not exist" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.699826 4833 scope.go:117] "RemoveContainer" containerID="f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.700237 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\": container with ID starting with f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080 not found: ID does not exist" containerID="f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.700277 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080"} err="failed to get container status \"f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\": rpc error: code = NotFound desc = could not find container \"f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\": container with ID starting with f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080 not found: ID does not exist" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.700296 4833 scope.go:117] 
"RemoveContainer" containerID="4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.700511 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\": container with ID starting with 4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79 not found: ID does not exist" containerID="4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.700537 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79"} err="failed to get container status \"4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\": rpc error: code = NotFound desc = could not find container \"4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\": container with ID starting with 4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79 not found: ID does not exist" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.700550 4833 scope.go:117] "RemoveContainer" containerID="a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.700795 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\": container with ID starting with a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc not found: ID does not exist" containerID="a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.700841 4833 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc"} err="failed to get container status \"a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\": rpc error: code = NotFound desc = could not find container \"a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\": container with ID starting with a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc not found: ID does not exist" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.700872 4833 scope.go:117] "RemoveContainer" containerID="e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.701285 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\": container with ID starting with e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec not found: ID does not exist" containerID="e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.701311 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec"} err="failed to get container status \"e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\": rpc error: code = NotFound desc = could not find container \"e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\": container with ID starting with e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec not found: ID does not exist" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.701500 4833 scope.go:117] "RemoveContainer" containerID="ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.701750 4833 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\": container with ID starting with ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7 not found: ID does not exist" containerID="ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.701773 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7"} err="failed to get container status \"ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\": rpc error: code = NotFound desc = could not find container \"ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\": container with ID starting with ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7 not found: ID does not exist" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.701790 4833 scope.go:117] "RemoveContainer" containerID="42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.702788 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\": container with ID starting with 42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2 not found: ID does not exist" containerID="42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.702815 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2"} err="failed to get container status \"42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\": rpc error: code = NotFound desc = could not find container 
\"42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\": container with ID starting with 42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2 not found: ID does not exist" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.702833 4833 scope.go:117] "RemoveContainer" containerID="924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83" Feb 17 13:56:10 crc kubenswrapper[4833]: E0217 13:56:10.703113 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\": container with ID starting with 924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83 not found: ID does not exist" containerID="924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.703143 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83"} err="failed to get container status \"924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\": rpc error: code = NotFound desc = could not find container \"924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\": container with ID starting with 924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83 not found: ID does not exist" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.703159 4833 scope.go:117] "RemoveContainer" containerID="a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.703457 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9"} err="failed to get container status \"a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9\": rpc error: code = NotFound desc = could not find 
container \"a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9\": container with ID starting with a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9 not found: ID does not exist" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.703491 4833 scope.go:117] "RemoveContainer" containerID="b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.703756 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b"} err="failed to get container status \"b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\": rpc error: code = NotFound desc = could not find container \"b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\": container with ID starting with b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b not found: ID does not exist" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.703779 4833 scope.go:117] "RemoveContainer" containerID="8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.704017 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2"} err="failed to get container status \"8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\": rpc error: code = NotFound desc = could not find container \"8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\": container with ID starting with 8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2 not found: ID does not exist" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.704047 4833 scope.go:117] "RemoveContainer" containerID="f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.704271 4833 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080"} err="failed to get container status \"f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\": rpc error: code = NotFound desc = could not find container \"f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\": container with ID starting with f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080 not found: ID does not exist" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.704290 4833 scope.go:117] "RemoveContainer" containerID="4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.704696 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79"} err="failed to get container status \"4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\": rpc error: code = NotFound desc = could not find container \"4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\": container with ID starting with 4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79 not found: ID does not exist" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.704741 4833 scope.go:117] "RemoveContainer" containerID="a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.705062 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc"} err="failed to get container status \"a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\": rpc error: code = NotFound desc = could not find container \"a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\": container with ID starting with 
a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc not found: ID does not exist" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.705085 4833 scope.go:117] "RemoveContainer" containerID="e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.705366 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec"} err="failed to get container status \"e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\": rpc error: code = NotFound desc = could not find container \"e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\": container with ID starting with e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec not found: ID does not exist" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.705394 4833 scope.go:117] "RemoveContainer" containerID="ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.705635 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7"} err="failed to get container status \"ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\": rpc error: code = NotFound desc = could not find container \"ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\": container with ID starting with ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7 not found: ID does not exist" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.705657 4833 scope.go:117] "RemoveContainer" containerID="42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2" Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.705939 4833 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2"} err="failed to get container status \"42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\": rpc error: code = NotFound desc = could not find container \"42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\": container with ID starting with 42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2 not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.705968 4833 scope.go:117] "RemoveContainer" containerID="924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.706265 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83"} err="failed to get container status \"924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\": rpc error: code = NotFound desc = could not find container \"924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\": container with ID starting with 924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83 not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.706289 4833 scope.go:117] "RemoveContainer" containerID="a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.706540 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9"} err="failed to get container status \"a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9\": rpc error: code = NotFound desc = could not find container \"a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9\": container with ID starting with a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9 not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.706569 4833 scope.go:117] "RemoveContainer" containerID="b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.706839 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b"} err="failed to get container status \"b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\": rpc error: code = NotFound desc = could not find container \"b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\": container with ID starting with b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.706867 4833 scope.go:117] "RemoveContainer" containerID="8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.707132 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2"} err="failed to get container status \"8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\": rpc error: code = NotFound desc = could not find container \"8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\": container with ID starting with 8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2 not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.707151 4833 scope.go:117] "RemoveContainer" containerID="f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.707412 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080"} err="failed to get container status \"f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\": rpc error: code = NotFound desc = could not find container \"f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\": container with ID starting with f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080 not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.707431 4833 scope.go:117] "RemoveContainer" containerID="4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.707663 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79"} err="failed to get container status \"4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\": rpc error: code = NotFound desc = could not find container \"4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\": container with ID starting with 4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79 not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.707691 4833 scope.go:117] "RemoveContainer" containerID="a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.708062 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc"} err="failed to get container status \"a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\": rpc error: code = NotFound desc = could not find container \"a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\": container with ID starting with a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.708090 4833 scope.go:117] "RemoveContainer" containerID="e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.708411 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec"} err="failed to get container status \"e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\": rpc error: code = NotFound desc = could not find container \"e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\": container with ID starting with e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.708430 4833 scope.go:117] "RemoveContainer" containerID="ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.708751 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7"} err="failed to get container status \"ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\": rpc error: code = NotFound desc = could not find container \"ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\": container with ID starting with ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7 not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.708771 4833 scope.go:117] "RemoveContainer" containerID="42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.709015 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2"} err="failed to get container status \"42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\": rpc error: code = NotFound desc = could not find container \"42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\": container with ID starting with 42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2 not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.709056 4833 scope.go:117] "RemoveContainer" containerID="924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.709338 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83"} err="failed to get container status \"924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\": rpc error: code = NotFound desc = could not find container \"924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\": container with ID starting with 924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83 not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.709361 4833 scope.go:117] "RemoveContainer" containerID="a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.709611 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9"} err="failed to get container status \"a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9\": rpc error: code = NotFound desc = could not find container \"a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9\": container with ID starting with a2614c0cf9a6f0902ef7c50122eea59027e4caf925e68609637d981b8a9837a9 not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.709635 4833 scope.go:117] "RemoveContainer" containerID="b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.709927 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b"} err="failed to get container status \"b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\": rpc error: code = NotFound desc = could not find container \"b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b\": container with ID starting with b53fbee4db34f53cf5d58fa925e793cae75e34d7aeac4b10518659e2b89e177b not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.709955 4833 scope.go:117] "RemoveContainer" containerID="8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.710234 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2"} err="failed to get container status \"8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\": rpc error: code = NotFound desc = could not find container \"8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2\": container with ID starting with 8fdc1a29a9f5b3be51beb5d28d7a09f671e4f9effeb9ed122da196da8623abe2 not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.710260 4833 scope.go:117] "RemoveContainer" containerID="f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.710465 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080"} err="failed to get container status \"f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\": rpc error: code = NotFound desc = could not find container \"f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080\": container with ID starting with f5c74f469d2d07cfd265ee1f625b957efc0f9d1f0efd5f483c962abdfd66a080 not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.710486 4833 scope.go:117] "RemoveContainer" containerID="4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.710780 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79"} err="failed to get container status \"4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\": rpc error: code = NotFound desc = could not find container \"4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79\": container with ID starting with 4ee6e033fa9b29f59b186e39ec83d162b40e36aab28bb1e568a54c351a8c0c79 not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.710802 4833 scope.go:117] "RemoveContainer" containerID="a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.711085 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc"} err="failed to get container status \"a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\": rpc error: code = NotFound desc = could not find container \"a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc\": container with ID starting with a89a54d9bedad8afe45f34ce0bc2324d0ca8590c328aad9d1557b2d92e5b81fc not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.711110 4833 scope.go:117] "RemoveContainer" containerID="e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.711351 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec"} err="failed to get container status \"e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\": rpc error: code = NotFound desc = could not find container \"e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec\": container with ID starting with e0c4e266b444d8ac1a67f9b5a6705f4b871e3930daf7707cd30b589a4cce24ec not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.711376 4833 scope.go:117] "RemoveContainer" containerID="ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.711588 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7"} err="failed to get container status \"ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\": rpc error: code = NotFound desc = could not find container \"ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7\": container with ID starting with ed9da7ba3cbbca328926f2372a7e581793373ec876c961f7988617c668d958b7 not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.711626 4833 scope.go:117] "RemoveContainer" containerID="42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.711963 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2"} err="failed to get container status \"42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\": rpc error: code = NotFound desc = could not find container \"42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2\": container with ID starting with 42e357a961b833cb840c8b33ce0db65d57cc5fb37789b00a6bcd3eb67d8e33f2 not found: ID does not exist"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.712012 4833 scope.go:117] "RemoveContainer" containerID="924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83"
Feb 17 13:56:10 crc kubenswrapper[4833]: I0217 13:56:10.712240 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83"} err="failed to get container status \"924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\": rpc error: code = NotFound desc = could not find container \"924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83\": container with ID starting with 924d39b320ced85d24156cf347029d79ae5c47643e1db9fec4ea7962ca11bf83 not found: ID does not exist"
Feb 17 13:56:11 crc kubenswrapper[4833]: I0217 13:56:11.048067 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72c5918a-056f-446c-b138-a1be7140a5b0" path="/var/lib/kubelet/pods/72c5918a-056f-446c-b138-a1be7140a5b0/volumes"
Feb 17 13:56:11 crc kubenswrapper[4833]: I0217 13:56:11.403426 4833 generic.go:334] "Generic (PLEG): container finished" podID="8a7e5f75-0918-4179-ba63-0daa9ceaf7e5" containerID="b43d262f963d2762ed9846567d77805180d67cfe5fd92a1a0a655dab74a9d8e7" exitCode=0
Feb 17 13:56:11 crc kubenswrapper[4833]: I0217 13:56:11.403480 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" event={"ID":"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5","Type":"ContainerDied","Data":"b43d262f963d2762ed9846567d77805180d67cfe5fd92a1a0a655dab74a9d8e7"}
Feb 17 13:56:11 crc kubenswrapper[4833]: I0217 13:56:11.403512 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" event={"ID":"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5","Type":"ContainerStarted","Data":"5422ff5fb7885a04eca4536f07d6f22fabd0bdf9e36a6184a2f8f85c6b18d5d3"}
Feb 17 13:56:12 crc kubenswrapper[4833]: I0217 13:56:12.410931 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" event={"ID":"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5","Type":"ContainerStarted","Data":"5485c025dcabe1fbde238d34a75b328b14fb3cdf7fbf6647bceca9fc29a6c705"}
Feb 17 13:56:12 crc kubenswrapper[4833]: I0217 13:56:12.411367 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" event={"ID":"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5","Type":"ContainerStarted","Data":"55de7d7033b2bb1c06de833ea9f560b3f4020cb10f8184716fe95ebce8b53da6"}
Feb 17 13:56:12 crc kubenswrapper[4833]: I0217 13:56:12.411378 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" event={"ID":"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5","Type":"ContainerStarted","Data":"eeff3818005b0fbbe26dbf91c763126287b74f96cf46d6c902e7377144f99304"}
Feb 17 13:56:12 crc kubenswrapper[4833]: I0217 13:56:12.411386 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" event={"ID":"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5","Type":"ContainerStarted","Data":"7cb3d933bca8947fb6faf33b5d1ca1cc17e753d9b21a70dae3b196f21a04bdbd"}
Feb 17 13:56:12 crc kubenswrapper[4833]: I0217 13:56:12.411394 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" event={"ID":"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5","Type":"ContainerStarted","Data":"a9d2bc81a13b21927c4a15165b686fdf1906164653692e3f4a122717ef0973c9"}
Feb 17 13:56:12 crc kubenswrapper[4833]: I0217 13:56:12.411403 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" event={"ID":"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5","Type":"ContainerStarted","Data":"26e3fdbb865f9577eb180810a15d5d2ec1476ccc9b66a48d13cd8f9851e0d85c"}
Feb 17 13:56:14 crc kubenswrapper[4833]: I0217 13:56:14.426145 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" event={"ID":"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5","Type":"ContainerStarted","Data":"4fee75c067bfd21883abca0f5c08203ff89520fca0f0704fc6d28a4bfc9ed315"}
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.767781 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-g9798"]
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.768797 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.771408 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.771511 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-pfj2v"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.771690 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.782285 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8"]
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.783081 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.786787 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-s748x"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.787012 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.810099 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf"]
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.810900 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.857646 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-hwr99"]
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.858637 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-hwr99"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.863362 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-zb7sj"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.863520 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.881722 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9dedda0d-335a-43f4-b199-e9da78d5b37b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8\" (UID: \"9dedda0d-335a-43f4-b199-e9da78d5b37b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.881762 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9dedda0d-335a-43f4-b199-e9da78d5b37b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8\" (UID: \"9dedda0d-335a-43f4-b199-e9da78d5b37b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.881839 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9wcg\" (UniqueName: \"kubernetes.io/projected/305d6a8c-4999-4fbb-a005-e115897f16c8-kube-api-access-m9wcg\") pod \"obo-prometheus-operator-68bc856cb9-g9798\" (UID: \"305d6a8c-4999-4fbb-a005-e115897f16c8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.983440 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9dedda0d-335a-43f4-b199-e9da78d5b37b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8\" (UID: \"9dedda0d-335a-43f4-b199-e9da78d5b37b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.983477 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9dedda0d-335a-43f4-b199-e9da78d5b37b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8\" (UID: \"9dedda0d-335a-43f4-b199-e9da78d5b37b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.983517 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05eabfc9-be7f-4663-8c09-7662798751cb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf\" (UID: \"05eabfc9-be7f-4663-8c09-7662798751cb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.983557 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg8qc\" (UniqueName: \"kubernetes.io/projected/153717ff-fc08-498f-a4ee-bf06ef5866ab-kube-api-access-fg8qc\") pod \"observability-operator-59bdc8b94-hwr99\" (UID: \"153717ff-fc08-498f-a4ee-bf06ef5866ab\") " pod="openshift-operators/observability-operator-59bdc8b94-hwr99"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.983583 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05eabfc9-be7f-4663-8c09-7662798751cb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf\" (UID: \"05eabfc9-be7f-4663-8c09-7662798751cb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.983619 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9wcg\" (UniqueName: \"kubernetes.io/projected/305d6a8c-4999-4fbb-a005-e115897f16c8-kube-api-access-m9wcg\") pod \"obo-prometheus-operator-68bc856cb9-g9798\" (UID: \"305d6a8c-4999-4fbb-a005-e115897f16c8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.983648 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/153717ff-fc08-498f-a4ee-bf06ef5866ab-observability-operator-tls\") pod \"observability-operator-59bdc8b94-hwr99\" (UID: \"153717ff-fc08-498f-a4ee-bf06ef5866ab\") " pod="openshift-operators/observability-operator-59bdc8b94-hwr99"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.989391 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9dedda0d-335a-43f4-b199-e9da78d5b37b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8\" (UID: \"9dedda0d-335a-43f4-b199-e9da78d5b37b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8"
Feb 17 13:56:16 crc kubenswrapper[4833]: I0217 13:56:16.989402 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9dedda0d-335a-43f4-b199-e9da78d5b37b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8\" (UID: \"9dedda0d-335a-43f4-b199-e9da78d5b37b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8"
Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.009690 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ckvt2"]
Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.010251 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9wcg\" (UniqueName: \"kubernetes.io/projected/305d6a8c-4999-4fbb-a005-e115897f16c8-kube-api-access-m9wcg\") pod \"obo-prometheus-operator-68bc856cb9-g9798\" (UID: \"305d6a8c-4999-4fbb-a005-e115897f16c8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798"
Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.010326 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ckvt2"
Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.021006 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-cqzg5"
Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.085211 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg8qc\" (UniqueName: \"kubernetes.io/projected/153717ff-fc08-498f-a4ee-bf06ef5866ab-kube-api-access-fg8qc\") pod \"observability-operator-59bdc8b94-hwr99\" (UID: \"153717ff-fc08-498f-a4ee-bf06ef5866ab\") " pod="openshift-operators/observability-operator-59bdc8b94-hwr99"
Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.085265 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05eabfc9-be7f-4663-8c09-7662798751cb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf\" (UID: \"05eabfc9-be7f-4663-8c09-7662798751cb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf"
Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.085301 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/153717ff-fc08-498f-a4ee-bf06ef5866ab-observability-operator-tls\") pod \"observability-operator-59bdc8b94-hwr99\" (UID: \"153717ff-fc08-498f-a4ee-bf06ef5866ab\") " pod="openshift-operators/observability-operator-59bdc8b94-hwr99"
Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.085352 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05eabfc9-be7f-4663-8c09-7662798751cb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf\" (UID: \"05eabfc9-be7f-4663-8c09-7662798751cb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf"
Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.088479 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/153717ff-fc08-498f-a4ee-bf06ef5866ab-observability-operator-tls\") pod \"observability-operator-59bdc8b94-hwr99\" (UID: \"153717ff-fc08-498f-a4ee-bf06ef5866ab\") " pod="openshift-operators/observability-operator-59bdc8b94-hwr99"
Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.088947 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05eabfc9-be7f-4663-8c09-7662798751cb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf\" (UID: \"05eabfc9-be7f-4663-8c09-7662798751cb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf"
Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.090196 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05eabfc9-be7f-4663-8c09-7662798751cb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf\" (UID: \"05eabfc9-be7f-4663-8c09-7662798751cb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf"
Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.093094 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798"
Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.103690 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg8qc\" (UniqueName: \"kubernetes.io/projected/153717ff-fc08-498f-a4ee-bf06ef5866ab-kube-api-access-fg8qc\") pod \"observability-operator-59bdc8b94-hwr99\" (UID: \"153717ff-fc08-498f-a4ee-bf06ef5866ab\") " pod="openshift-operators/observability-operator-59bdc8b94-hwr99"
Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.106690 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8"
Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.122460 4833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-g9798_openshift-operators_305d6a8c-4999-4fbb-a005-e115897f16c8_0(779a2d0c7910905a3476ec5a99d0cf5a4a215781a6b4e9fb2615d11c98ed73bc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.122527 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-g9798_openshift-operators_305d6a8c-4999-4fbb-a005-e115897f16c8_0(779a2d0c7910905a3476ec5a99d0cf5a4a215781a6b4e9fb2615d11c98ed73bc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798"
Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.122547 4833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-g9798_openshift-operators_305d6a8c-4999-4fbb-a005-e115897f16c8_0(779a2d0c7910905a3476ec5a99d0cf5a4a215781a6b4e9fb2615d11c98ed73bc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798"
Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.122589 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-g9798_openshift-operators(305d6a8c-4999-4fbb-a005-e115897f16c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-g9798_openshift-operators(305d6a8c-4999-4fbb-a005-e115897f16c8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-g9798_openshift-operators_305d6a8c-4999-4fbb-a005-e115897f16c8_0(779a2d0c7910905a3476ec5a99d0cf5a4a215781a6b4e9fb2615d11c98ed73bc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798" podUID="305d6a8c-4999-4fbb-a005-e115897f16c8"
Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.129915 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf"
Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.132504 4833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_openshift-operators_9dedda0d-335a-43f4-b199-e9da78d5b37b_0(a83dba0c1ff1055179b2034f58ad2ca9b4145ac2403438d82506f15f0791ee4a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.132572 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_openshift-operators_9dedda0d-335a-43f4-b199-e9da78d5b37b_0(a83dba0c1ff1055179b2034f58ad2ca9b4145ac2403438d82506f15f0791ee4a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8"
Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.132595 4833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_openshift-operators_9dedda0d-335a-43f4-b199-e9da78d5b37b_0(a83dba0c1ff1055179b2034f58ad2ca9b4145ac2403438d82506f15f0791ee4a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8" Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.132635 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_openshift-operators(9dedda0d-335a-43f4-b199-e9da78d5b37b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_openshift-operators(9dedda0d-335a-43f4-b199-e9da78d5b37b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_openshift-operators_9dedda0d-335a-43f4-b199-e9da78d5b37b_0(a83dba0c1ff1055179b2034f58ad2ca9b4145ac2403438d82506f15f0791ee4a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8" podUID="9dedda0d-335a-43f4-b199-e9da78d5b37b" Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.150381 4833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_openshift-operators_05eabfc9-be7f-4663-8c09-7662798751cb_0(dd9b16bd5ac0095ee149e7ea25a531c9c6b875b4041e1420f98f45e9e66e1027): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.150452 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_openshift-operators_05eabfc9-be7f-4663-8c09-7662798751cb_0(dd9b16bd5ac0095ee149e7ea25a531c9c6b875b4041e1420f98f45e9e66e1027): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf" Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.150475 4833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_openshift-operators_05eabfc9-be7f-4663-8c09-7662798751cb_0(dd9b16bd5ac0095ee149e7ea25a531c9c6b875b4041e1420f98f45e9e66e1027): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf" Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.150535 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_openshift-operators(05eabfc9-be7f-4663-8c09-7662798751cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_openshift-operators(05eabfc9-be7f-4663-8c09-7662798751cb)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_openshift-operators_05eabfc9-be7f-4663-8c09-7662798751cb_0(dd9b16bd5ac0095ee149e7ea25a531c9c6b875b4041e1420f98f45e9e66e1027): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf" podUID="05eabfc9-be7f-4663-8c09-7662798751cb" Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.179056 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-hwr99" Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.189463 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncld7\" (UniqueName: \"kubernetes.io/projected/fd47124f-cfc1-4a57-820c-b54cd2e4d113-kube-api-access-ncld7\") pod \"perses-operator-5bf474d74f-ckvt2\" (UID: \"fd47124f-cfc1-4a57-820c-b54cd2e4d113\") " pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.189583 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd47124f-cfc1-4a57-820c-b54cd2e4d113-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ckvt2\" (UID: \"fd47124f-cfc1-4a57-820c-b54cd2e4d113\") " pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.223880 4833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-hwr99_openshift-operators_153717ff-fc08-498f-a4ee-bf06ef5866ab_0(8cd1d88a8b6f020956ff1c29b0f8b40ec484644bba20a30cc2cbca6d1b4763e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.224009 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-hwr99_openshift-operators_153717ff-fc08-498f-a4ee-bf06ef5866ab_0(8cd1d88a8b6f020956ff1c29b0f8b40ec484644bba20a30cc2cbca6d1b4763e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-hwr99" Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.224113 4833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-hwr99_openshift-operators_153717ff-fc08-498f-a4ee-bf06ef5866ab_0(8cd1d88a8b6f020956ff1c29b0f8b40ec484644bba20a30cc2cbca6d1b4763e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-hwr99" Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.224214 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-hwr99_openshift-operators(153717ff-fc08-498f-a4ee-bf06ef5866ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-hwr99_openshift-operators(153717ff-fc08-498f-a4ee-bf06ef5866ab)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-hwr99_openshift-operators_153717ff-fc08-498f-a4ee-bf06ef5866ab_0(8cd1d88a8b6f020956ff1c29b0f8b40ec484644bba20a30cc2cbca6d1b4763e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-hwr99" podUID="153717ff-fc08-498f-a4ee-bf06ef5866ab" Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.291061 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd47124f-cfc1-4a57-820c-b54cd2e4d113-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ckvt2\" (UID: \"fd47124f-cfc1-4a57-820c-b54cd2e4d113\") " pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.291131 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncld7\" (UniqueName: \"kubernetes.io/projected/fd47124f-cfc1-4a57-820c-b54cd2e4d113-kube-api-access-ncld7\") pod \"perses-operator-5bf474d74f-ckvt2\" (UID: \"fd47124f-cfc1-4a57-820c-b54cd2e4d113\") " pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.292233 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd47124f-cfc1-4a57-820c-b54cd2e4d113-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ckvt2\" (UID: \"fd47124f-cfc1-4a57-820c-b54cd2e4d113\") " pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.328272 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncld7\" (UniqueName: \"kubernetes.io/projected/fd47124f-cfc1-4a57-820c-b54cd2e4d113-kube-api-access-ncld7\") pod \"perses-operator-5bf474d74f-ckvt2\" (UID: \"fd47124f-cfc1-4a57-820c-b54cd2e4d113\") " pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.352581 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.393218 4833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ckvt2_openshift-operators_fd47124f-cfc1-4a57-820c-b54cd2e4d113_0(a2943f087096bb657f40aa2e7a82a66aa1ab216baf595a70b87eddd7a0cfdbdd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.393308 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ckvt2_openshift-operators_fd47124f-cfc1-4a57-820c-b54cd2e4d113_0(a2943f087096bb657f40aa2e7a82a66aa1ab216baf595a70b87eddd7a0cfdbdd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.393334 4833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ckvt2_openshift-operators_fd47124f-cfc1-4a57-820c-b54cd2e4d113_0(a2943f087096bb657f40aa2e7a82a66aa1ab216baf595a70b87eddd7a0cfdbdd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:17 crc kubenswrapper[4833]: E0217 13:56:17.393387 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-ckvt2_openshift-operators(fd47124f-cfc1-4a57-820c-b54cd2e4d113)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-ckvt2_openshift-operators(fd47124f-cfc1-4a57-820c-b54cd2e4d113)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ckvt2_openshift-operators_fd47124f-cfc1-4a57-820c-b54cd2e4d113_0(a2943f087096bb657f40aa2e7a82a66aa1ab216baf595a70b87eddd7a0cfdbdd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" podUID="fd47124f-cfc1-4a57-820c-b54cd2e4d113" Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.461017 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" event={"ID":"8a7e5f75-0918-4179-ba63-0daa9ceaf7e5","Type":"ContainerStarted","Data":"ec5b558eccef0bc997356526ec822d35bb739959429e29164f211c97a0536ef4"} Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.462105 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.462134 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.462175 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.507934 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:17 crc 
kubenswrapper[4833]: I0217 13:56:17.511439 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:17 crc kubenswrapper[4833]: I0217 13:56:17.523938 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" podStartSLOduration=8.52392507 podStartE2EDuration="8.52392507s" podCreationTimestamp="2026-02-17 13:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:56:17.521172752 +0000 UTC m=+667.156272215" watchObservedRunningTime="2026-02-17 13:56:17.52392507 +0000 UTC m=+667.159024503" Feb 17 13:56:18 crc kubenswrapper[4833]: I0217 13:56:18.104850 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8"] Feb 17 13:56:18 crc kubenswrapper[4833]: I0217 13:56:18.104969 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8" Feb 17 13:56:18 crc kubenswrapper[4833]: I0217 13:56:18.105363 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8" Feb 17 13:56:18 crc kubenswrapper[4833]: I0217 13:56:18.108287 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ckvt2"] Feb 17 13:56:18 crc kubenswrapper[4833]: I0217 13:56:18.108385 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:18 crc kubenswrapper[4833]: I0217 13:56:18.108828 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:18 crc kubenswrapper[4833]: I0217 13:56:18.118458 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-g9798"] Feb 17 13:56:18 crc kubenswrapper[4833]: I0217 13:56:18.118585 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798" Feb 17 13:56:18 crc kubenswrapper[4833]: I0217 13:56:18.118928 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798" Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.190318 4833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ckvt2_openshift-operators_fd47124f-cfc1-4a57-820c-b54cd2e4d113_0(5c41f3d661d9ba255de0f527ce1dfea9245986427037ee590ce25021b77ea089): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.190504 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ckvt2_openshift-operators_fd47124f-cfc1-4a57-820c-b54cd2e4d113_0(5c41f3d661d9ba255de0f527ce1dfea9245986427037ee590ce25021b77ea089): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.190596 4833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ckvt2_openshift-operators_fd47124f-cfc1-4a57-820c-b54cd2e4d113_0(5c41f3d661d9ba255de0f527ce1dfea9245986427037ee590ce25021b77ea089): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.190690 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-ckvt2_openshift-operators(fd47124f-cfc1-4a57-820c-b54cd2e4d113)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-ckvt2_openshift-operators(fd47124f-cfc1-4a57-820c-b54cd2e4d113)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ckvt2_openshift-operators_fd47124f-cfc1-4a57-820c-b54cd2e4d113_0(5c41f3d661d9ba255de0f527ce1dfea9245986427037ee590ce25021b77ea089): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" podUID="fd47124f-cfc1-4a57-820c-b54cd2e4d113" Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.190996 4833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_openshift-operators_9dedda0d-335a-43f4-b199-e9da78d5b37b_0(aab1f3be8e26b7d560e4aa0c5799342025adb2cbb72370e2958ed59871268dae): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.191086 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_openshift-operators_9dedda0d-335a-43f4-b199-e9da78d5b37b_0(aab1f3be8e26b7d560e4aa0c5799342025adb2cbb72370e2958ed59871268dae): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8" Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.191119 4833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_openshift-operators_9dedda0d-335a-43f4-b199-e9da78d5b37b_0(aab1f3be8e26b7d560e4aa0c5799342025adb2cbb72370e2958ed59871268dae): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8" Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.191170 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_openshift-operators(9dedda0d-335a-43f4-b199-e9da78d5b37b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_openshift-operators(9dedda0d-335a-43f4-b199-e9da78d5b37b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_openshift-operators_9dedda0d-335a-43f4-b199-e9da78d5b37b_0(aab1f3be8e26b7d560e4aa0c5799342025adb2cbb72370e2958ed59871268dae): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8" podUID="9dedda0d-335a-43f4-b199-e9da78d5b37b" Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.196908 4833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-g9798_openshift-operators_305d6a8c-4999-4fbb-a005-e115897f16c8_0(aeeb727ce862dfb911a94b0f196f27a2d8daf73f07d511685ce4e8662d787d65): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.196954 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-g9798_openshift-operators_305d6a8c-4999-4fbb-a005-e115897f16c8_0(aeeb727ce862dfb911a94b0f196f27a2d8daf73f07d511685ce4e8662d787d65): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798" Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.196972 4833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-g9798_openshift-operators_305d6a8c-4999-4fbb-a005-e115897f16c8_0(aeeb727ce862dfb911a94b0f196f27a2d8daf73f07d511685ce4e8662d787d65): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798" Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.197021 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-g9798_openshift-operators(305d6a8c-4999-4fbb-a005-e115897f16c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-g9798_openshift-operators(305d6a8c-4999-4fbb-a005-e115897f16c8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-g9798_openshift-operators_305d6a8c-4999-4fbb-a005-e115897f16c8_0(aeeb727ce862dfb911a94b0f196f27a2d8daf73f07d511685ce4e8662d787d65): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798" podUID="305d6a8c-4999-4fbb-a005-e115897f16c8" Feb 17 13:56:18 crc kubenswrapper[4833]: I0217 13:56:18.256460 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf"] Feb 17 13:56:18 crc kubenswrapper[4833]: I0217 13:56:18.256591 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf" Feb 17 13:56:18 crc kubenswrapper[4833]: I0217 13:56:18.257100 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf" Feb 17 13:56:18 crc kubenswrapper[4833]: I0217 13:56:18.259386 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-hwr99"] Feb 17 13:56:18 crc kubenswrapper[4833]: I0217 13:56:18.259505 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-hwr99" Feb 17 13:56:18 crc kubenswrapper[4833]: I0217 13:56:18.259894 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-hwr99" Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.299246 4833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_openshift-operators_05eabfc9-be7f-4663-8c09-7662798751cb_0(6b75b1e93c24cadcd97f9c8ebe7c64bae3f4ef31dab8824efb2aefa4c205060c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.299931 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_openshift-operators_05eabfc9-be7f-4663-8c09-7662798751cb_0(6b75b1e93c24cadcd97f9c8ebe7c64bae3f4ef31dab8824efb2aefa4c205060c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf" Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.299961 4833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_openshift-operators_05eabfc9-be7f-4663-8c09-7662798751cb_0(6b75b1e93c24cadcd97f9c8ebe7c64bae3f4ef31dab8824efb2aefa4c205060c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf" Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.300018 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_openshift-operators(05eabfc9-be7f-4663-8c09-7662798751cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_openshift-operators(05eabfc9-be7f-4663-8c09-7662798751cb)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_openshift-operators_05eabfc9-be7f-4663-8c09-7662798751cb_0(6b75b1e93c24cadcd97f9c8ebe7c64bae3f4ef31dab8824efb2aefa4c205060c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf" podUID="05eabfc9-be7f-4663-8c09-7662798751cb" Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.304000 4833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-hwr99_openshift-operators_153717ff-fc08-498f-a4ee-bf06ef5866ab_0(5857cbfe42bbc384084e9d39f271f81a4e5931bc33e28dad0ee93f4ed8298396): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.304059 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-hwr99_openshift-operators_153717ff-fc08-498f-a4ee-bf06ef5866ab_0(5857cbfe42bbc384084e9d39f271f81a4e5931bc33e28dad0ee93f4ed8298396): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-hwr99" Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.304077 4833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-hwr99_openshift-operators_153717ff-fc08-498f-a4ee-bf06ef5866ab_0(5857cbfe42bbc384084e9d39f271f81a4e5931bc33e28dad0ee93f4ed8298396): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-hwr99" Feb 17 13:56:18 crc kubenswrapper[4833]: E0217 13:56:18.304110 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-hwr99_openshift-operators(153717ff-fc08-498f-a4ee-bf06ef5866ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-hwr99_openshift-operators(153717ff-fc08-498f-a4ee-bf06ef5866ab)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-hwr99_openshift-operators_153717ff-fc08-498f-a4ee-bf06ef5866ab_0(5857cbfe42bbc384084e9d39f271f81a4e5931bc33e28dad0ee93f4ed8298396): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-hwr99" podUID="153717ff-fc08-498f-a4ee-bf06ef5866ab" Feb 17 13:56:24 crc kubenswrapper[4833]: I0217 13:56:24.041956 4833 scope.go:117] "RemoveContainer" containerID="11b83835c273f377e2c85db9ff37901aa2d246ce6673d32ff9925526757a98b3" Feb 17 13:56:24 crc kubenswrapper[4833]: E0217 13:56:24.043892 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-wlt4c_openshift-multus(a3b8d3ca-f768-4129-9c1a-b4866dd852d4)\"" pod="openshift-multus/multus-wlt4c" podUID="a3b8d3ca-f768-4129-9c1a-b4866dd852d4" Feb 17 13:56:29 crc kubenswrapper[4833]: I0217 13:56:29.041032 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf" Feb 17 13:56:29 crc kubenswrapper[4833]: I0217 13:56:29.041612 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf" Feb 17 13:56:29 crc kubenswrapper[4833]: E0217 13:56:29.086427 4833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_openshift-operators_05eabfc9-be7f-4663-8c09-7662798751cb_0(9da9ed79e85380be270845f355972a551ac980bf8c017f5d9fd0579e6c34f35a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 13:56:29 crc kubenswrapper[4833]: E0217 13:56:29.086490 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_openshift-operators_05eabfc9-be7f-4663-8c09-7662798751cb_0(9da9ed79e85380be270845f355972a551ac980bf8c017f5d9fd0579e6c34f35a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf" Feb 17 13:56:29 crc kubenswrapper[4833]: E0217 13:56:29.086517 4833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_openshift-operators_05eabfc9-be7f-4663-8c09-7662798751cb_0(9da9ed79e85380be270845f355972a551ac980bf8c017f5d9fd0579e6c34f35a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf" Feb 17 13:56:29 crc kubenswrapper[4833]: E0217 13:56:29.086566 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_openshift-operators(05eabfc9-be7f-4663-8c09-7662798751cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_openshift-operators(05eabfc9-be7f-4663-8c09-7662798751cb)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_openshift-operators_05eabfc9-be7f-4663-8c09-7662798751cb_0(9da9ed79e85380be270845f355972a551ac980bf8c017f5d9fd0579e6c34f35a): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf" podUID="05eabfc9-be7f-4663-8c09-7662798751cb" Feb 17 13:56:30 crc kubenswrapper[4833]: I0217 13:56:30.041373 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:30 crc kubenswrapper[4833]: I0217 13:56:30.041472 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-hwr99" Feb 17 13:56:30 crc kubenswrapper[4833]: I0217 13:56:30.041946 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:30 crc kubenswrapper[4833]: I0217 13:56:30.042100 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-hwr99" Feb 17 13:56:30 crc kubenswrapper[4833]: E0217 13:56:30.082331 4833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ckvt2_openshift-operators_fd47124f-cfc1-4a57-820c-b54cd2e4d113_0(e84f67394ab5a965630d08077dcccc053ed6fff3816b52d3a218556c23786d9f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 13:56:30 crc kubenswrapper[4833]: E0217 13:56:30.082423 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ckvt2_openshift-operators_fd47124f-cfc1-4a57-820c-b54cd2e4d113_0(e84f67394ab5a965630d08077dcccc053ed6fff3816b52d3a218556c23786d9f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:30 crc kubenswrapper[4833]: E0217 13:56:30.082453 4833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ckvt2_openshift-operators_fd47124f-cfc1-4a57-820c-b54cd2e4d113_0(e84f67394ab5a965630d08077dcccc053ed6fff3816b52d3a218556c23786d9f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:30 crc kubenswrapper[4833]: E0217 13:56:30.082514 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-ckvt2_openshift-operators(fd47124f-cfc1-4a57-820c-b54cd2e4d113)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-ckvt2_openshift-operators(fd47124f-cfc1-4a57-820c-b54cd2e4d113)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ckvt2_openshift-operators_fd47124f-cfc1-4a57-820c-b54cd2e4d113_0(e84f67394ab5a965630d08077dcccc053ed6fff3816b52d3a218556c23786d9f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" podUID="fd47124f-cfc1-4a57-820c-b54cd2e4d113" Feb 17 13:56:30 crc kubenswrapper[4833]: E0217 13:56:30.089085 4833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-hwr99_openshift-operators_153717ff-fc08-498f-a4ee-bf06ef5866ab_0(33351ec7226f1effb6f871d711ea3baaceb1e272cda52a15048eca791797c0f3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 13:56:30 crc kubenswrapper[4833]: E0217 13:56:30.089175 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-hwr99_openshift-operators_153717ff-fc08-498f-a4ee-bf06ef5866ab_0(33351ec7226f1effb6f871d711ea3baaceb1e272cda52a15048eca791797c0f3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-hwr99" Feb 17 13:56:30 crc kubenswrapper[4833]: E0217 13:56:30.089201 4833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-hwr99_openshift-operators_153717ff-fc08-498f-a4ee-bf06ef5866ab_0(33351ec7226f1effb6f871d711ea3baaceb1e272cda52a15048eca791797c0f3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-hwr99" Feb 17 13:56:30 crc kubenswrapper[4833]: E0217 13:56:30.089278 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-hwr99_openshift-operators(153717ff-fc08-498f-a4ee-bf06ef5866ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-hwr99_openshift-operators(153717ff-fc08-498f-a4ee-bf06ef5866ab)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-hwr99_openshift-operators_153717ff-fc08-498f-a4ee-bf06ef5866ab_0(33351ec7226f1effb6f871d711ea3baaceb1e272cda52a15048eca791797c0f3): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-hwr99" podUID="153717ff-fc08-498f-a4ee-bf06ef5866ab" Feb 17 13:56:32 crc kubenswrapper[4833]: I0217 13:56:32.041470 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798" Feb 17 13:56:32 crc kubenswrapper[4833]: I0217 13:56:32.041903 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798" Feb 17 13:56:32 crc kubenswrapper[4833]: E0217 13:56:32.064291 4833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-g9798_openshift-operators_305d6a8c-4999-4fbb-a005-e115897f16c8_0(0710dc3c342166f8c13859e8b7c56160f37ebfa63db30ec6a8d62566b9e133a8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 13:56:32 crc kubenswrapper[4833]: E0217 13:56:32.064353 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-g9798_openshift-operators_305d6a8c-4999-4fbb-a005-e115897f16c8_0(0710dc3c342166f8c13859e8b7c56160f37ebfa63db30ec6a8d62566b9e133a8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798" Feb 17 13:56:32 crc kubenswrapper[4833]: E0217 13:56:32.064376 4833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-g9798_openshift-operators_305d6a8c-4999-4fbb-a005-e115897f16c8_0(0710dc3c342166f8c13859e8b7c56160f37ebfa63db30ec6a8d62566b9e133a8): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798" Feb 17 13:56:32 crc kubenswrapper[4833]: E0217 13:56:32.064414 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-g9798_openshift-operators(305d6a8c-4999-4fbb-a005-e115897f16c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-g9798_openshift-operators(305d6a8c-4999-4fbb-a005-e115897f16c8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-g9798_openshift-operators_305d6a8c-4999-4fbb-a005-e115897f16c8_0(0710dc3c342166f8c13859e8b7c56160f37ebfa63db30ec6a8d62566b9e133a8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798" podUID="305d6a8c-4999-4fbb-a005-e115897f16c8" Feb 17 13:56:33 crc kubenswrapper[4833]: I0217 13:56:33.040486 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8" Feb 17 13:56:33 crc kubenswrapper[4833]: I0217 13:56:33.041278 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8" Feb 17 13:56:33 crc kubenswrapper[4833]: E0217 13:56:33.071693 4833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_openshift-operators_9dedda0d-335a-43f4-b199-e9da78d5b37b_0(e0dbda82cf03b0b729559d37e2219c19dfda699b19fb54007a0978968e7ffa4b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 13:56:33 crc kubenswrapper[4833]: E0217 13:56:33.071765 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_openshift-operators_9dedda0d-335a-43f4-b199-e9da78d5b37b_0(e0dbda82cf03b0b729559d37e2219c19dfda699b19fb54007a0978968e7ffa4b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8" Feb 17 13:56:33 crc kubenswrapper[4833]: E0217 13:56:33.071789 4833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_openshift-operators_9dedda0d-335a-43f4-b199-e9da78d5b37b_0(e0dbda82cf03b0b729559d37e2219c19dfda699b19fb54007a0978968e7ffa4b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8" Feb 17 13:56:33 crc kubenswrapper[4833]: E0217 13:56:33.071838 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_openshift-operators(9dedda0d-335a-43f4-b199-e9da78d5b37b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_openshift-operators(9dedda0d-335a-43f4-b199-e9da78d5b37b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_openshift-operators_9dedda0d-335a-43f4-b199-e9da78d5b37b_0(e0dbda82cf03b0b729559d37e2219c19dfda699b19fb54007a0978968e7ffa4b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8" podUID="9dedda0d-335a-43f4-b199-e9da78d5b37b" Feb 17 13:56:36 crc kubenswrapper[4833]: I0217 13:56:36.041911 4833 scope.go:117] "RemoveContainer" containerID="11b83835c273f377e2c85db9ff37901aa2d246ce6673d32ff9925526757a98b3" Feb 17 13:56:36 crc kubenswrapper[4833]: I0217 13:56:36.559675 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wlt4c_a3b8d3ca-f768-4129-9c1a-b4866dd852d4/kube-multus/2.log" Feb 17 13:56:36 crc kubenswrapper[4833]: I0217 13:56:36.560181 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wlt4c" event={"ID":"a3b8d3ca-f768-4129-9c1a-b4866dd852d4","Type":"ContainerStarted","Data":"b7ebaeb36dba5128ff4ec9d063d5197891f286da99e44056cdfd80b0a1e98903"} Feb 17 13:56:40 crc kubenswrapper[4833]: I0217 13:56:40.375721 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lm5vc" Feb 17 13:56:41 crc kubenswrapper[4833]: I0217 13:56:41.040583 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:41 crc kubenswrapper[4833]: I0217 13:56:41.043694 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:41 crc kubenswrapper[4833]: I0217 13:56:41.497640 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ckvt2"] Feb 17 13:56:41 crc kubenswrapper[4833]: W0217 13:56:41.504556 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd47124f_cfc1_4a57_820c_b54cd2e4d113.slice/crio-dd38f1b60ed368e5d3ce0db8d5849a72b5cb3ddd23d7ba6d85e73d928ac62092 WatchSource:0}: Error finding container dd38f1b60ed368e5d3ce0db8d5849a72b5cb3ddd23d7ba6d85e73d928ac62092: Status 404 returned error can't find the container with id dd38f1b60ed368e5d3ce0db8d5849a72b5cb3ddd23d7ba6d85e73d928ac62092 Feb 17 13:56:41 crc kubenswrapper[4833]: I0217 13:56:41.584893 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" event={"ID":"fd47124f-cfc1-4a57-820c-b54cd2e4d113","Type":"ContainerStarted","Data":"dd38f1b60ed368e5d3ce0db8d5849a72b5cb3ddd23d7ba6d85e73d928ac62092"} Feb 17 13:56:43 crc kubenswrapper[4833]: I0217 13:56:43.041170 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-hwr99" Feb 17 13:56:43 crc kubenswrapper[4833]: I0217 13:56:43.041196 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf" Feb 17 13:56:43 crc kubenswrapper[4833]: I0217 13:56:43.041933 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-hwr99" Feb 17 13:56:43 crc kubenswrapper[4833]: I0217 13:56:43.042081 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf" Feb 17 13:56:43 crc kubenswrapper[4833]: W0217 13:56:43.446185 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod153717ff_fc08_498f_a4ee_bf06ef5866ab.slice/crio-025c9339732aa1f303a966ca60c8e82cc0a29c5962eb6ac1ea07cc1d69013a2d WatchSource:0}: Error finding container 025c9339732aa1f303a966ca60c8e82cc0a29c5962eb6ac1ea07cc1d69013a2d: Status 404 returned error can't find the container with id 025c9339732aa1f303a966ca60c8e82cc0a29c5962eb6ac1ea07cc1d69013a2d Feb 17 13:56:43 crc kubenswrapper[4833]: I0217 13:56:43.447912 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-hwr99"] Feb 17 13:56:43 crc kubenswrapper[4833]: I0217 13:56:43.498030 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf"] Feb 17 13:56:43 crc kubenswrapper[4833]: W0217 13:56:43.507138 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05eabfc9_be7f_4663_8c09_7662798751cb.slice/crio-8ecb9cd9456698b1f68c46663ff856812363f2c8008efa4d8d7868e4b93d337e WatchSource:0}: Error finding container 8ecb9cd9456698b1f68c46663ff856812363f2c8008efa4d8d7868e4b93d337e: Status 404 returned error can't find the container with id 8ecb9cd9456698b1f68c46663ff856812363f2c8008efa4d8d7868e4b93d337e Feb 17 13:56:43 crc kubenswrapper[4833]: I0217 13:56:43.596701 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-hwr99" event={"ID":"153717ff-fc08-498f-a4ee-bf06ef5866ab","Type":"ContainerStarted","Data":"025c9339732aa1f303a966ca60c8e82cc0a29c5962eb6ac1ea07cc1d69013a2d"} Feb 17 13:56:43 crc kubenswrapper[4833]: I0217 13:56:43.598434 4833 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf" event={"ID":"05eabfc9-be7f-4663-8c09-7662798751cb","Type":"ContainerStarted","Data":"8ecb9cd9456698b1f68c46663ff856812363f2c8008efa4d8d7868e4b93d337e"} Feb 17 13:56:44 crc kubenswrapper[4833]: I0217 13:56:44.042353 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8" Feb 17 13:56:44 crc kubenswrapper[4833]: I0217 13:56:44.042631 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8" Feb 17 13:56:44 crc kubenswrapper[4833]: I0217 13:56:44.243501 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:56:44 crc kubenswrapper[4833]: I0217 13:56:44.243796 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:56:44 crc kubenswrapper[4833]: I0217 13:56:44.446542 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8"] Feb 17 13:56:45 crc kubenswrapper[4833]: W0217 13:56:45.623208 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dedda0d_335a_43f4_b199_e9da78d5b37b.slice/crio-5b160593c1ba4ddc8f5f90b871994c78ddd8ef1f257593a3ae57c0bb42cdf084 WatchSource:0}: Error 
finding container 5b160593c1ba4ddc8f5f90b871994c78ddd8ef1f257593a3ae57c0bb42cdf084: Status 404 returned error can't find the container with id 5b160593c1ba4ddc8f5f90b871994c78ddd8ef1f257593a3ae57c0bb42cdf084 Feb 17 13:56:46 crc kubenswrapper[4833]: I0217 13:56:46.614157 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8" event={"ID":"9dedda0d-335a-43f4-b199-e9da78d5b37b","Type":"ContainerStarted","Data":"5b160593c1ba4ddc8f5f90b871994c78ddd8ef1f257593a3ae57c0bb42cdf084"} Feb 17 13:56:46 crc kubenswrapper[4833]: I0217 13:56:46.615428 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" event={"ID":"fd47124f-cfc1-4a57-820c-b54cd2e4d113","Type":"ContainerStarted","Data":"bb341b262231dfc92ecc2bb4f8efb3d336e3e59a466876639c274f0c0272d5f5"} Feb 17 13:56:46 crc kubenswrapper[4833]: I0217 13:56:46.615655 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" Feb 17 13:56:47 crc kubenswrapper[4833]: I0217 13:56:47.045059 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798" Feb 17 13:56:47 crc kubenswrapper[4833]: I0217 13:56:47.045534 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798" Feb 17 13:56:47 crc kubenswrapper[4833]: I0217 13:56:47.353667 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-ckvt2" podStartSLOduration=27.174467502 podStartE2EDuration="31.353649212s" podCreationTimestamp="2026-02-17 13:56:16 +0000 UTC" firstStartedPulling="2026-02-17 13:56:41.506228218 +0000 UTC m=+691.141327651" lastFinishedPulling="2026-02-17 13:56:45.685409928 +0000 UTC m=+695.320509361" observedRunningTime="2026-02-17 13:56:46.635302039 +0000 UTC m=+696.270401472" watchObservedRunningTime="2026-02-17 13:56:47.353649212 +0000 UTC m=+696.988748655" Feb 17 13:56:47 crc kubenswrapper[4833]: I0217 13:56:47.355708 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-g9798"] Feb 17 13:56:47 crc kubenswrapper[4833]: W0217 13:56:47.361561 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod305d6a8c_4999_4fbb_a005_e115897f16c8.slice/crio-a69c043ad7fe129f8a46ca143a22bace8394f59184f8295def3c6b90ec070a32 WatchSource:0}: Error finding container a69c043ad7fe129f8a46ca143a22bace8394f59184f8295def3c6b90ec070a32: Status 404 returned error can't find the container with id a69c043ad7fe129f8a46ca143a22bace8394f59184f8295def3c6b90ec070a32 Feb 17 13:56:47 crc kubenswrapper[4833]: I0217 13:56:47.623706 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798" event={"ID":"305d6a8c-4999-4fbb-a005-e115897f16c8","Type":"ContainerStarted","Data":"a69c043ad7fe129f8a46ca143a22bace8394f59184f8295def3c6b90ec070a32"} Feb 17 13:56:50 crc kubenswrapper[4833]: I0217 13:56:50.643194 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-hwr99" 
event={"ID":"153717ff-fc08-498f-a4ee-bf06ef5866ab","Type":"ContainerStarted","Data":"64aae0a8f7ebbe39e80f740892dc7a4ba30f1b2d3992112ea93f65805fe19269"} Feb 17 13:56:50 crc kubenswrapper[4833]: I0217 13:56:50.644227 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-hwr99" Feb 17 13:56:50 crc kubenswrapper[4833]: I0217 13:56:50.645646 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8" event={"ID":"9dedda0d-335a-43f4-b199-e9da78d5b37b","Type":"ContainerStarted","Data":"c4e1e05203918756b1b2708a97fc43415fb362fc34d84b9dced6e4e38de6478b"} Feb 17 13:56:50 crc kubenswrapper[4833]: I0217 13:56:50.645742 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-hwr99" Feb 17 13:56:50 crc kubenswrapper[4833]: I0217 13:56:50.647603 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf" event={"ID":"05eabfc9-be7f-4663-8c09-7662798751cb","Type":"ContainerStarted","Data":"8c4f265ee97c1f551cab9dda80b5e9cc0a6f83f4fc7c037cbcedd6a353d5a881"} Feb 17 13:56:50 crc kubenswrapper[4833]: I0217 13:56:50.667925 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-hwr99" podStartSLOduration=28.398315441 podStartE2EDuration="34.667904713s" podCreationTimestamp="2026-02-17 13:56:16 +0000 UTC" firstStartedPulling="2026-02-17 13:56:43.449412467 +0000 UTC m=+693.084511901" lastFinishedPulling="2026-02-17 13:56:49.71900174 +0000 UTC m=+699.354101173" observedRunningTime="2026-02-17 13:56:50.665476805 +0000 UTC m=+700.300576238" watchObservedRunningTime="2026-02-17 13:56:50.667904713 +0000 UTC m=+700.303004146" Feb 17 13:56:50 crc kubenswrapper[4833]: I0217 13:56:50.700157 4833 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8" podStartSLOduration=31.607280697 podStartE2EDuration="35.700139865s" podCreationTimestamp="2026-02-17 13:56:15 +0000 UTC" firstStartedPulling="2026-02-17 13:56:45.625047121 +0000 UTC m=+695.260146574" lastFinishedPulling="2026-02-17 13:56:49.717906309 +0000 UTC m=+699.353005742" observedRunningTime="2026-02-17 13:56:50.691607384 +0000 UTC m=+700.326706817" watchObservedRunningTime="2026-02-17 13:56:50.700139865 +0000 UTC m=+700.335239298" Feb 17 13:56:50 crc kubenswrapper[4833]: I0217 13:56:50.760086 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf" podStartSLOduration=29.550401332 podStartE2EDuration="35.76006426s" podCreationTimestamp="2026-02-17 13:56:15 +0000 UTC" firstStartedPulling="2026-02-17 13:56:43.510258348 +0000 UTC m=+693.145357781" lastFinishedPulling="2026-02-17 13:56:49.719921276 +0000 UTC m=+699.355020709" observedRunningTime="2026-02-17 13:56:50.755357417 +0000 UTC m=+700.390456860" watchObservedRunningTime="2026-02-17 13:56:50.76006426 +0000 UTC m=+700.395163693" Feb 17 13:56:52 crc kubenswrapper[4833]: I0217 13:56:52.660597 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798" event={"ID":"305d6a8c-4999-4fbb-a005-e115897f16c8","Type":"ContainerStarted","Data":"36f2e815eab5463d8c0cb8608685aa5e0ce9f973ee37b2ba1290a4b321ab881e"} Feb 17 13:56:56 crc kubenswrapper[4833]: I0217 13:56:56.812141 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-g9798" podStartSLOduration=37.610452396 podStartE2EDuration="41.812114542s" podCreationTimestamp="2026-02-17 13:56:15 +0000 UTC" firstStartedPulling="2026-02-17 13:56:47.364067707 +0000 UTC m=+696.999167140" 
lastFinishedPulling="2026-02-17 13:56:51.565729843 +0000 UTC m=+701.200829286" observedRunningTime="2026-02-17 13:56:52.67708221 +0000 UTC m=+702.312181683" watchObservedRunningTime="2026-02-17 13:56:56.812114542 +0000 UTC m=+706.447214015" Feb 17 13:56:56 crc kubenswrapper[4833]: I0217 13:56:56.814821 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq"] Feb 17 13:56:56 crc kubenswrapper[4833]: I0217 13:56:56.816083 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq" Feb 17 13:56:56 crc kubenswrapper[4833]: I0217 13:56:56.819020 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 13:56:56 crc kubenswrapper[4833]: I0217 13:56:56.831999 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq"] Feb 17 13:56:56 crc kubenswrapper[4833]: I0217 13:56:56.922496 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19698a8a-e73a-4586-97f5-27e2e0bf0dff-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq\" (UID: \"19698a8a-e73a-4586-97f5-27e2e0bf0dff\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq" Feb 17 13:56:56 crc kubenswrapper[4833]: I0217 13:56:56.922670 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2f4k\" (UniqueName: \"kubernetes.io/projected/19698a8a-e73a-4586-97f5-27e2e0bf0dff-kube-api-access-h2f4k\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq\" (UID: \"19698a8a-e73a-4586-97f5-27e2e0bf0dff\") " 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq"
Feb 17 13:56:56 crc kubenswrapper[4833]: I0217 13:56:56.922712 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19698a8a-e73a-4586-97f5-27e2e0bf0dff-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq\" (UID: \"19698a8a-e73a-4586-97f5-27e2e0bf0dff\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq"
Feb 17 13:56:57 crc kubenswrapper[4833]: I0217 13:56:57.024053 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2f4k\" (UniqueName: \"kubernetes.io/projected/19698a8a-e73a-4586-97f5-27e2e0bf0dff-kube-api-access-h2f4k\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq\" (UID: \"19698a8a-e73a-4586-97f5-27e2e0bf0dff\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq"
Feb 17 13:56:57 crc kubenswrapper[4833]: I0217 13:56:57.024109 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19698a8a-e73a-4586-97f5-27e2e0bf0dff-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq\" (UID: \"19698a8a-e73a-4586-97f5-27e2e0bf0dff\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq"
Feb 17 13:56:57 crc kubenswrapper[4833]: I0217 13:56:57.024167 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19698a8a-e73a-4586-97f5-27e2e0bf0dff-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq\" (UID: \"19698a8a-e73a-4586-97f5-27e2e0bf0dff\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq"
Feb 17 13:56:57 crc kubenswrapper[4833]: I0217 13:56:57.025017 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19698a8a-e73a-4586-97f5-27e2e0bf0dff-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq\" (UID: \"19698a8a-e73a-4586-97f5-27e2e0bf0dff\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq"
Feb 17 13:56:57 crc kubenswrapper[4833]: I0217 13:56:57.025659 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19698a8a-e73a-4586-97f5-27e2e0bf0dff-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq\" (UID: \"19698a8a-e73a-4586-97f5-27e2e0bf0dff\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq"
Feb 17 13:56:57 crc kubenswrapper[4833]: I0217 13:56:57.052822 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2f4k\" (UniqueName: \"kubernetes.io/projected/19698a8a-e73a-4586-97f5-27e2e0bf0dff-kube-api-access-h2f4k\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq\" (UID: \"19698a8a-e73a-4586-97f5-27e2e0bf0dff\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq"
Feb 17 13:56:57 crc kubenswrapper[4833]: I0217 13:56:57.134977 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq"
Feb 17 13:56:57 crc kubenswrapper[4833]: I0217 13:56:57.359954 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-ckvt2"
Feb 17 13:56:57 crc kubenswrapper[4833]: I0217 13:56:57.360922 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq"]
Feb 17 13:56:57 crc kubenswrapper[4833]: I0217 13:56:57.687848 4833 generic.go:334] "Generic (PLEG): container finished" podID="19698a8a-e73a-4586-97f5-27e2e0bf0dff" containerID="5ef1b80f39b3c793ce2fb0e219e3834d4bf0ab86a67faf6a288cfa7a18ededa5" exitCode=0
Feb 17 13:56:57 crc kubenswrapper[4833]: I0217 13:56:57.687892 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq" event={"ID":"19698a8a-e73a-4586-97f5-27e2e0bf0dff","Type":"ContainerDied","Data":"5ef1b80f39b3c793ce2fb0e219e3834d4bf0ab86a67faf6a288cfa7a18ededa5"}
Feb 17 13:56:57 crc kubenswrapper[4833]: I0217 13:56:57.687917 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq" event={"ID":"19698a8a-e73a-4586-97f5-27e2e0bf0dff","Type":"ContainerStarted","Data":"ccfdf3663cb7f92525790443dd84c2d31f50792c3ace2840b138ab5ae2d76803"}
Feb 17 13:56:59 crc kubenswrapper[4833]: I0217 13:56:59.700154 4833 generic.go:334] "Generic (PLEG): container finished" podID="19698a8a-e73a-4586-97f5-27e2e0bf0dff" containerID="86ab6c8777296c7fceb9ca998e2fa196d48cdbe15a0a650090d5243640143430" exitCode=0
Feb 17 13:56:59 crc kubenswrapper[4833]: I0217 13:56:59.700445 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq" event={"ID":"19698a8a-e73a-4586-97f5-27e2e0bf0dff","Type":"ContainerDied","Data":"86ab6c8777296c7fceb9ca998e2fa196d48cdbe15a0a650090d5243640143430"}
Feb 17 13:57:00 crc kubenswrapper[4833]: I0217 13:57:00.709296 4833 generic.go:334] "Generic (PLEG): container finished" podID="19698a8a-e73a-4586-97f5-27e2e0bf0dff" containerID="266938909e4b1e8acf3f9f2ccb0f47e090d28671063fa5755ca27af2f8abd7f3" exitCode=0
Feb 17 13:57:00 crc kubenswrapper[4833]: I0217 13:57:00.709368 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq" event={"ID":"19698a8a-e73a-4586-97f5-27e2e0bf0dff","Type":"ContainerDied","Data":"266938909e4b1e8acf3f9f2ccb0f47e090d28671063fa5755ca27af2f8abd7f3"}
Feb 17 13:57:02 crc kubenswrapper[4833]: I0217 13:57:02.021369 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq"
Feb 17 13:57:02 crc kubenswrapper[4833]: I0217 13:57:02.200922 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19698a8a-e73a-4586-97f5-27e2e0bf0dff-util\") pod \"19698a8a-e73a-4586-97f5-27e2e0bf0dff\" (UID: \"19698a8a-e73a-4586-97f5-27e2e0bf0dff\") "
Feb 17 13:57:02 crc kubenswrapper[4833]: I0217 13:57:02.201066 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2f4k\" (UniqueName: \"kubernetes.io/projected/19698a8a-e73a-4586-97f5-27e2e0bf0dff-kube-api-access-h2f4k\") pod \"19698a8a-e73a-4586-97f5-27e2e0bf0dff\" (UID: \"19698a8a-e73a-4586-97f5-27e2e0bf0dff\") "
Feb 17 13:57:02 crc kubenswrapper[4833]: I0217 13:57:02.201191 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19698a8a-e73a-4586-97f5-27e2e0bf0dff-bundle\") pod \"19698a8a-e73a-4586-97f5-27e2e0bf0dff\" (UID: \"19698a8a-e73a-4586-97f5-27e2e0bf0dff\") "
Feb 17 13:57:02 crc kubenswrapper[4833]: I0217 13:57:02.201851 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19698a8a-e73a-4586-97f5-27e2e0bf0dff-bundle" (OuterVolumeSpecName: "bundle") pod "19698a8a-e73a-4586-97f5-27e2e0bf0dff" (UID: "19698a8a-e73a-4586-97f5-27e2e0bf0dff"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:57:02 crc kubenswrapper[4833]: I0217 13:57:02.210371 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19698a8a-e73a-4586-97f5-27e2e0bf0dff-kube-api-access-h2f4k" (OuterVolumeSpecName: "kube-api-access-h2f4k") pod "19698a8a-e73a-4586-97f5-27e2e0bf0dff" (UID: "19698a8a-e73a-4586-97f5-27e2e0bf0dff"). InnerVolumeSpecName "kube-api-access-h2f4k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:57:02 crc kubenswrapper[4833]: I0217 13:57:02.236869 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19698a8a-e73a-4586-97f5-27e2e0bf0dff-util" (OuterVolumeSpecName: "util") pod "19698a8a-e73a-4586-97f5-27e2e0bf0dff" (UID: "19698a8a-e73a-4586-97f5-27e2e0bf0dff"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:57:02 crc kubenswrapper[4833]: I0217 13:57:02.302257 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2f4k\" (UniqueName: \"kubernetes.io/projected/19698a8a-e73a-4586-97f5-27e2e0bf0dff-kube-api-access-h2f4k\") on node \"crc\" DevicePath \"\""
Feb 17 13:57:02 crc kubenswrapper[4833]: I0217 13:57:02.302297 4833 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19698a8a-e73a-4586-97f5-27e2e0bf0dff-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 13:57:02 crc kubenswrapper[4833]: I0217 13:57:02.302308 4833 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19698a8a-e73a-4586-97f5-27e2e0bf0dff-util\") on node \"crc\" DevicePath \"\""
Feb 17 13:57:02 crc kubenswrapper[4833]: I0217 13:57:02.722200 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq" event={"ID":"19698a8a-e73a-4586-97f5-27e2e0bf0dff","Type":"ContainerDied","Data":"ccfdf3663cb7f92525790443dd84c2d31f50792c3ace2840b138ab5ae2d76803"}
Feb 17 13:57:02 crc kubenswrapper[4833]: I0217 13:57:02.722237 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccfdf3663cb7f92525790443dd84c2d31f50792c3ace2840b138ab5ae2d76803"
Feb 17 13:57:02 crc kubenswrapper[4833]: I0217 13:57:02.722299 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq"
Feb 17 13:57:08 crc kubenswrapper[4833]: I0217 13:57:08.399350 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-llrpt"]
Feb 17 13:57:08 crc kubenswrapper[4833]: E0217 13:57:08.399822 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19698a8a-e73a-4586-97f5-27e2e0bf0dff" containerName="util"
Feb 17 13:57:08 crc kubenswrapper[4833]: I0217 13:57:08.399834 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="19698a8a-e73a-4586-97f5-27e2e0bf0dff" containerName="util"
Feb 17 13:57:08 crc kubenswrapper[4833]: E0217 13:57:08.399844 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19698a8a-e73a-4586-97f5-27e2e0bf0dff" containerName="extract"
Feb 17 13:57:08 crc kubenswrapper[4833]: I0217 13:57:08.399850 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="19698a8a-e73a-4586-97f5-27e2e0bf0dff" containerName="extract"
Feb 17 13:57:08 crc kubenswrapper[4833]: E0217 13:57:08.399863 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19698a8a-e73a-4586-97f5-27e2e0bf0dff" containerName="pull"
Feb 17 13:57:08 crc kubenswrapper[4833]: I0217 13:57:08.399869 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="19698a8a-e73a-4586-97f5-27e2e0bf0dff" containerName="pull"
Feb 17 13:57:08 crc kubenswrapper[4833]: I0217 13:57:08.399983 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="19698a8a-e73a-4586-97f5-27e2e0bf0dff" containerName="extract"
Feb 17 13:57:08 crc kubenswrapper[4833]: I0217 13:57:08.400459 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-llrpt"
Feb 17 13:57:08 crc kubenswrapper[4833]: I0217 13:57:08.402341 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Feb 17 13:57:08 crc kubenswrapper[4833]: I0217 13:57:08.402465 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Feb 17 13:57:08 crc kubenswrapper[4833]: I0217 13:57:08.406595 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-j64cp"
Feb 17 13:57:08 crc kubenswrapper[4833]: I0217 13:57:08.415578 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-llrpt"]
Feb 17 13:57:08 crc kubenswrapper[4833]: I0217 13:57:08.577875 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtbzt\" (UniqueName: \"kubernetes.io/projected/e4203154-718b-47fd-8727-178e1e2caeb7-kube-api-access-gtbzt\") pod \"nmstate-operator-694c9596b7-llrpt\" (UID: \"e4203154-718b-47fd-8727-178e1e2caeb7\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-llrpt"
Feb 17 13:57:08 crc kubenswrapper[4833]: I0217 13:57:08.678922 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtbzt\" (UniqueName: \"kubernetes.io/projected/e4203154-718b-47fd-8727-178e1e2caeb7-kube-api-access-gtbzt\") pod \"nmstate-operator-694c9596b7-llrpt\" (UID: \"e4203154-718b-47fd-8727-178e1e2caeb7\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-llrpt"
Feb 17 13:57:08 crc kubenswrapper[4833]: I0217 13:57:08.700417 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtbzt\" (UniqueName: \"kubernetes.io/projected/e4203154-718b-47fd-8727-178e1e2caeb7-kube-api-access-gtbzt\") pod \"nmstate-operator-694c9596b7-llrpt\" (UID: \"e4203154-718b-47fd-8727-178e1e2caeb7\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-llrpt"
Feb 17 13:57:08 crc kubenswrapper[4833]: I0217 13:57:08.717215 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-llrpt"
Feb 17 13:57:08 crc kubenswrapper[4833]: I0217 13:57:08.904505 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-llrpt"]
Feb 17 13:57:09 crc kubenswrapper[4833]: I0217 13:57:09.769591 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-llrpt" event={"ID":"e4203154-718b-47fd-8727-178e1e2caeb7","Type":"ContainerStarted","Data":"4712e1df7e273c2d145eba761bd1ef3fe0e05852b79eeb0dbe51a85542a2e851"}
Feb 17 13:57:11 crc kubenswrapper[4833]: I0217 13:57:11.783683 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-llrpt" event={"ID":"e4203154-718b-47fd-8727-178e1e2caeb7","Type":"ContainerStarted","Data":"03e6ec0eebd394b2f2d2ac03ca191e0dff0dab7b55878f05a522c284cc3cffb1"}
Feb 17 13:57:11 crc kubenswrapper[4833]: I0217 13:57:11.804671 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-llrpt" podStartSLOduration=1.997756883 podStartE2EDuration="3.804642474s" podCreationTimestamp="2026-02-17 13:57:08 +0000 UTC" firstStartedPulling="2026-02-17 13:57:08.915018946 +0000 UTC m=+718.550118379" lastFinishedPulling="2026-02-17 13:57:10.721904537 +0000 UTC m=+720.357003970" observedRunningTime="2026-02-17 13:57:11.803053029 +0000 UTC m=+721.438152482" watchObservedRunningTime="2026-02-17 13:57:11.804642474 +0000 UTC m=+721.439741937"
Feb 17 13:57:14 crc kubenswrapper[4833]: I0217 13:57:14.243645 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 13:57:14 crc kubenswrapper[4833]: I0217 13:57:14.244031 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 13:57:17 crc kubenswrapper[4833]: I0217 13:57:17.881892 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-jlgbt"]
Feb 17 13:57:17 crc kubenswrapper[4833]: I0217 13:57:17.883285 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jlgbt"
Feb 17 13:57:17 crc kubenswrapper[4833]: I0217 13:57:17.885006 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-mgsrs"
Feb 17 13:57:17 crc kubenswrapper[4833]: I0217 13:57:17.892637 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-jlgbt"]
Feb 17 13:57:17 crc kubenswrapper[4833]: I0217 13:57:17.897750 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-5cfll"]
Feb 17 13:57:17 crc kubenswrapper[4833]: I0217 13:57:17.898583 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-5cfll"
Feb 17 13:57:17 crc kubenswrapper[4833]: I0217 13:57:17.901156 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Feb 17 13:57:17 crc kubenswrapper[4833]: I0217 13:57:17.920827 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-5cfll"]
Feb 17 13:57:17 crc kubenswrapper[4833]: I0217 13:57:17.942240 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-dz6xg"]
Feb 17 13:57:17 crc kubenswrapper[4833]: I0217 13:57:17.943213 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-dz6xg"
Feb 17 13:57:17 crc kubenswrapper[4833]: I0217 13:57:17.990773 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kr92\" (UniqueName: \"kubernetes.io/projected/e60895e0-eccc-4e5c-adbe-c2286162959e-kube-api-access-6kr92\") pod \"nmstate-metrics-58c85c668d-jlgbt\" (UID: \"e60895e0-eccc-4e5c-adbe-c2286162959e\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-jlgbt"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.021204 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-665f5"]
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.022001 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-665f5"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.026719 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hxcc9"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.027135 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.030182 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.036808 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-665f5"]
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.091667 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-5cfll\" (UID: \"a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-5cfll"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.091706 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/575ecda8-5096-4a4b-8bea-7163a6233bc2-dbus-socket\") pod \"nmstate-handler-dz6xg\" (UID: \"575ecda8-5096-4a4b-8bea-7163a6233bc2\") " pod="openshift-nmstate/nmstate-handler-dz6xg"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.091725 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9mgp\" (UniqueName: \"kubernetes.io/projected/575ecda8-5096-4a4b-8bea-7163a6233bc2-kube-api-access-q9mgp\") pod \"nmstate-handler-dz6xg\" (UID: \"575ecda8-5096-4a4b-8bea-7163a6233bc2\") " pod="openshift-nmstate/nmstate-handler-dz6xg"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.091754 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/575ecda8-5096-4a4b-8bea-7163a6233bc2-ovs-socket\") pod \"nmstate-handler-dz6xg\" (UID: \"575ecda8-5096-4a4b-8bea-7163a6233bc2\") " pod="openshift-nmstate/nmstate-handler-dz6xg"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.091788 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc5ch\" (UniqueName: \"kubernetes.io/projected/a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66-kube-api-access-qc5ch\") pod \"nmstate-webhook-866bcb46dc-5cfll\" (UID: \"a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-5cfll"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.091814 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kr92\" (UniqueName: \"kubernetes.io/projected/e60895e0-eccc-4e5c-adbe-c2286162959e-kube-api-access-6kr92\") pod \"nmstate-metrics-58c85c668d-jlgbt\" (UID: \"e60895e0-eccc-4e5c-adbe-c2286162959e\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-jlgbt"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.091856 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/575ecda8-5096-4a4b-8bea-7163a6233bc2-nmstate-lock\") pod \"nmstate-handler-dz6xg\" (UID: \"575ecda8-5096-4a4b-8bea-7163a6233bc2\") " pod="openshift-nmstate/nmstate-handler-dz6xg"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.109909 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kr92\" (UniqueName: \"kubernetes.io/projected/e60895e0-eccc-4e5c-adbe-c2286162959e-kube-api-access-6kr92\") pod \"nmstate-metrics-58c85c668d-jlgbt\" (UID: \"e60895e0-eccc-4e5c-adbe-c2286162959e\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-jlgbt"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.195424 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/575ecda8-5096-4a4b-8bea-7163a6233bc2-ovs-socket\") pod \"nmstate-handler-dz6xg\" (UID: \"575ecda8-5096-4a4b-8bea-7163a6233bc2\") " pod="openshift-nmstate/nmstate-handler-dz6xg"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.195488 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc5ch\" (UniqueName: \"kubernetes.io/projected/a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66-kube-api-access-qc5ch\") pod \"nmstate-webhook-866bcb46dc-5cfll\" (UID: \"a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-5cfll"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.195557 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-665f5\" (UID: \"64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-665f5"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.195586 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jkdt\" (UniqueName: \"kubernetes.io/projected/64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa-kube-api-access-8jkdt\") pod \"nmstate-console-plugin-5c78fc5d65-665f5\" (UID: \"64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-665f5"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.195621 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/575ecda8-5096-4a4b-8bea-7163a6233bc2-nmstate-lock\") pod \"nmstate-handler-dz6xg\" (UID: \"575ecda8-5096-4a4b-8bea-7163a6233bc2\") " pod="openshift-nmstate/nmstate-handler-dz6xg"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.195612 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/575ecda8-5096-4a4b-8bea-7163a6233bc2-ovs-socket\") pod \"nmstate-handler-dz6xg\" (UID: \"575ecda8-5096-4a4b-8bea-7163a6233bc2\") " pod="openshift-nmstate/nmstate-handler-dz6xg"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.195647 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-665f5\" (UID: \"64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-665f5"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.195706 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/575ecda8-5096-4a4b-8bea-7163a6233bc2-nmstate-lock\") pod \"nmstate-handler-dz6xg\" (UID: \"575ecda8-5096-4a4b-8bea-7163a6233bc2\") " pod="openshift-nmstate/nmstate-handler-dz6xg"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.195811 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-5cfll\" (UID: \"a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-5cfll"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.195853 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/575ecda8-5096-4a4b-8bea-7163a6233bc2-dbus-socket\") pod \"nmstate-handler-dz6xg\" (UID: \"575ecda8-5096-4a4b-8bea-7163a6233bc2\") " pod="openshift-nmstate/nmstate-handler-dz6xg"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.195903 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9mgp\" (UniqueName: \"kubernetes.io/projected/575ecda8-5096-4a4b-8bea-7163a6233bc2-kube-api-access-q9mgp\") pod \"nmstate-handler-dz6xg\" (UID: \"575ecda8-5096-4a4b-8bea-7163a6233bc2\") " pod="openshift-nmstate/nmstate-handler-dz6xg"
Feb 17 13:57:18 crc kubenswrapper[4833]: E0217 13:57:18.196179 4833 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Feb 17 13:57:18 crc kubenswrapper[4833]: E0217 13:57:18.196252 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66-tls-key-pair podName:a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66 nodeName:}" failed. No retries permitted until 2026-02-17 13:57:18.696228988 +0000 UTC m=+728.331328421 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66-tls-key-pair") pod "nmstate-webhook-866bcb46dc-5cfll" (UID: "a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66") : secret "openshift-nmstate-webhook" not found
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.196309 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/575ecda8-5096-4a4b-8bea-7163a6233bc2-dbus-socket\") pod \"nmstate-handler-dz6xg\" (UID: \"575ecda8-5096-4a4b-8bea-7163a6233bc2\") " pod="openshift-nmstate/nmstate-handler-dz6xg"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.201549 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jlgbt"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.218330 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc5ch\" (UniqueName: \"kubernetes.io/projected/a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66-kube-api-access-qc5ch\") pod \"nmstate-webhook-866bcb46dc-5cfll\" (UID: \"a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-5cfll"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.233184 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5f6c5d6894-pl4qb"]
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.234020 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5f6c5d6894-pl4qb"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.237919 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9mgp\" (UniqueName: \"kubernetes.io/projected/575ecda8-5096-4a4b-8bea-7163a6233bc2-kube-api-access-q9mgp\") pod \"nmstate-handler-dz6xg\" (UID: \"575ecda8-5096-4a4b-8bea-7163a6233bc2\") " pod="openshift-nmstate/nmstate-handler-dz6xg"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.261717 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-dz6xg"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.268053 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5f6c5d6894-pl4qb"]
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.297867 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-665f5\" (UID: \"64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-665f5"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.297917 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jkdt\" (UniqueName: \"kubernetes.io/projected/64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa-kube-api-access-8jkdt\") pod \"nmstate-console-plugin-5c78fc5d65-665f5\" (UID: \"64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-665f5"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.297957 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-665f5\" (UID: \"64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-665f5"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.300904 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-665f5\" (UID: \"64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-665f5"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.304345 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-665f5\" (UID: \"64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-665f5"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.316944 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jkdt\" (UniqueName: \"kubernetes.io/projected/64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa-kube-api-access-8jkdt\") pod \"nmstate-console-plugin-5c78fc5d65-665f5\" (UID: \"64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-665f5"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.341945 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-665f5"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.398790 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9c29e17-ba51-4407-8c61-31aa13d8afb4-console-serving-cert\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.398822 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f9c29e17-ba51-4407-8c61-31aa13d8afb4-console-oauth-config\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.398851 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f9c29e17-ba51-4407-8c61-31aa13d8afb4-oauth-serving-cert\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.398877 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f9c29e17-ba51-4407-8c61-31aa13d8afb4-console-config\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.398895 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jnfz\" (UniqueName: \"kubernetes.io/projected/f9c29e17-ba51-4407-8c61-31aa13d8afb4-kube-api-access-2jnfz\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.398959 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f9c29e17-ba51-4407-8c61-31aa13d8afb4-service-ca\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.398988 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9c29e17-ba51-4407-8c61-31aa13d8afb4-trusted-ca-bundle\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.500558 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f9c29e17-ba51-4407-8c61-31aa13d8afb4-oauth-serving-cert\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.500886 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f9c29e17-ba51-4407-8c61-31aa13d8afb4-console-config\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.500913 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jnfz\" (UniqueName: \"kubernetes.io/projected/f9c29e17-ba51-4407-8c61-31aa13d8afb4-kube-api-access-2jnfz\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.500958 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f9c29e17-ba51-4407-8c61-31aa13d8afb4-service-ca\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.501005 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9c29e17-ba51-4407-8c61-31aa13d8afb4-trusted-ca-bundle\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.501095 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9c29e17-ba51-4407-8c61-31aa13d8afb4-console-serving-cert\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.501124 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f9c29e17-ba51-4407-8c61-31aa13d8afb4-console-oauth-config\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.501489 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f9c29e17-ba51-4407-8c61-31aa13d8afb4-oauth-serving-cert\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.501755 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f9c29e17-ba51-4407-8c61-31aa13d8afb4-console-config\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.502404 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f9c29e17-ba51-4407-8c61-31aa13d8afb4-service-ca\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb"
Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.502521 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName:
\"kubernetes.io/configmap/f9c29e17-ba51-4407-8c61-31aa13d8afb4-trusted-ca-bundle\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb" Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.504557 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9c29e17-ba51-4407-8c61-31aa13d8afb4-console-serving-cert\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb" Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.506709 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f9c29e17-ba51-4407-8c61-31aa13d8afb4-console-oauth-config\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb" Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.519859 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jnfz\" (UniqueName: \"kubernetes.io/projected/f9c29e17-ba51-4407-8c61-31aa13d8afb4-kube-api-access-2jnfz\") pod \"console-5f6c5d6894-pl4qb\" (UID: \"f9c29e17-ba51-4407-8c61-31aa13d8afb4\") " pod="openshift-console/console-5f6c5d6894-pl4qb" Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.520290 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-665f5"] Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.608353 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5f6c5d6894-pl4qb" Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.682239 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-jlgbt"] Feb 17 13:57:18 crc kubenswrapper[4833]: W0217 13:57:18.685525 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode60895e0_eccc_4e5c_adbe_c2286162959e.slice/crio-c19c5d045b0cadb9de21cb7fc26603dfe26d87a4439ae8b37df603b779ea8fbf WatchSource:0}: Error finding container c19c5d045b0cadb9de21cb7fc26603dfe26d87a4439ae8b37df603b779ea8fbf: Status 404 returned error can't find the container with id c19c5d045b0cadb9de21cb7fc26603dfe26d87a4439ae8b37df603b779ea8fbf Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.704967 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-5cfll\" (UID: \"a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-5cfll" Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.709511 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-5cfll\" (UID: \"a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-5cfll" Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.811205 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5f6c5d6894-pl4qb"] Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.814556 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-5cfll" Feb 17 13:57:18 crc kubenswrapper[4833]: W0217 13:57:18.816078 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9c29e17_ba51_4407_8c61_31aa13d8afb4.slice/crio-3fe024dd574b191d1d6f3a0e5083fe28dbaf43dc53dcd21acc9628a6ce346dd8 WatchSource:0}: Error finding container 3fe024dd574b191d1d6f3a0e5083fe28dbaf43dc53dcd21acc9628a6ce346dd8: Status 404 returned error can't find the container with id 3fe024dd574b191d1d6f3a0e5083fe28dbaf43dc53dcd21acc9628a6ce346dd8 Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.824135 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-dz6xg" event={"ID":"575ecda8-5096-4a4b-8bea-7163a6233bc2","Type":"ContainerStarted","Data":"eb476a9e2d7437f3ae001760d9115c137923bd0def48fb2f38981dd589077bdc"} Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.825401 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jlgbt" event={"ID":"e60895e0-eccc-4e5c-adbe-c2286162959e","Type":"ContainerStarted","Data":"c19c5d045b0cadb9de21cb7fc26603dfe26d87a4439ae8b37df603b779ea8fbf"} Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.826421 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-665f5" event={"ID":"64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa","Type":"ContainerStarted","Data":"5a0468741fa121a13b17db8cfdbf63829a83b4b835028f32aba88160f46cd432"} Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.827290 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5f6c5d6894-pl4qb" event={"ID":"f9c29e17-ba51-4407-8c61-31aa13d8afb4","Type":"ContainerStarted","Data":"3fe024dd574b191d1d6f3a0e5083fe28dbaf43dc53dcd21acc9628a6ce346dd8"} Feb 17 13:57:18 crc kubenswrapper[4833]: I0217 13:57:18.991032 4833 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-5cfll"] Feb 17 13:57:18 crc kubenswrapper[4833]: W0217 13:57:18.999620 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9cf2a9a_b92f_41f1_a73f_f878ecd4ec66.slice/crio-ec8b7dddb27e4f66086c1a0173388604bc91b770d771e169ecbbe0ead600aef6 WatchSource:0}: Error finding container ec8b7dddb27e4f66086c1a0173388604bc91b770d771e169ecbbe0ead600aef6: Status 404 returned error can't find the container with id ec8b7dddb27e4f66086c1a0173388604bc91b770d771e169ecbbe0ead600aef6 Feb 17 13:57:19 crc kubenswrapper[4833]: I0217 13:57:19.834305 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-5cfll" event={"ID":"a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66","Type":"ContainerStarted","Data":"ec8b7dddb27e4f66086c1a0173388604bc91b770d771e169ecbbe0ead600aef6"} Feb 17 13:57:19 crc kubenswrapper[4833]: I0217 13:57:19.836380 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5f6c5d6894-pl4qb" event={"ID":"f9c29e17-ba51-4407-8c61-31aa13d8afb4","Type":"ContainerStarted","Data":"5fd93b796c2a14ac7dedd8163061084c9ce8529dc1fc1e1c51df9af7677e07b8"} Feb 17 13:57:19 crc kubenswrapper[4833]: I0217 13:57:19.850767 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5f6c5d6894-pl4qb" podStartSLOduration=1.850755386 podStartE2EDuration="1.850755386s" podCreationTimestamp="2026-02-17 13:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:57:19.850066467 +0000 UTC m=+729.485165900" watchObservedRunningTime="2026-02-17 13:57:19.850755386 +0000 UTC m=+729.485854809" Feb 17 13:57:21 crc kubenswrapper[4833]: I0217 13:57:21.854379 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-handler-dz6xg" event={"ID":"575ecda8-5096-4a4b-8bea-7163a6233bc2","Type":"ContainerStarted","Data":"1d0f35e619b09bafa6bf09caae0f7e719d292fbee060244c20b85b8f9c0f54dd"} Feb 17 13:57:21 crc kubenswrapper[4833]: I0217 13:57:21.854973 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-dz6xg" Feb 17 13:57:21 crc kubenswrapper[4833]: I0217 13:57:21.856385 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jlgbt" event={"ID":"e60895e0-eccc-4e5c-adbe-c2286162959e","Type":"ContainerStarted","Data":"ef631700ad3c9ba8fdcd6dff325a301a9b07f6a629c83abc5e154afa7b016430"} Feb 17 13:57:21 crc kubenswrapper[4833]: I0217 13:57:21.858203 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-665f5" event={"ID":"64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa","Type":"ContainerStarted","Data":"9b729bfdebc8e6d0a471196ab9e3a7d2e04c2ed4822a19f1b719c5bcf3e88c5d"} Feb 17 13:57:21 crc kubenswrapper[4833]: I0217 13:57:21.873233 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-dz6xg" podStartSLOduration=2.165959606 podStartE2EDuration="4.873215268s" podCreationTimestamp="2026-02-17 13:57:17 +0000 UTC" firstStartedPulling="2026-02-17 13:57:18.289848865 +0000 UTC m=+727.924948298" lastFinishedPulling="2026-02-17 13:57:20.997104527 +0000 UTC m=+730.632203960" observedRunningTime="2026-02-17 13:57:21.868663499 +0000 UTC m=+731.503762942" watchObservedRunningTime="2026-02-17 13:57:21.873215268 +0000 UTC m=+731.508314711" Feb 17 13:57:21 crc kubenswrapper[4833]: I0217 13:57:21.883987 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-665f5" podStartSLOduration=1.4265145879999999 podStartE2EDuration="3.883967224s" podCreationTimestamp="2026-02-17 13:57:18 +0000 UTC" 
firstStartedPulling="2026-02-17 13:57:18.529554593 +0000 UTC m=+728.164654026" lastFinishedPulling="2026-02-17 13:57:20.987007229 +0000 UTC m=+730.622106662" observedRunningTime="2026-02-17 13:57:21.882169953 +0000 UTC m=+731.517269396" watchObservedRunningTime="2026-02-17 13:57:21.883967224 +0000 UTC m=+731.519066677" Feb 17 13:57:25 crc kubenswrapper[4833]: I0217 13:57:25.912370 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-5cfll" event={"ID":"a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66","Type":"ContainerStarted","Data":"e00a39cad9ade0dae8280911602c7f4f9102fb6d40548157ee150999f246a056"} Feb 17 13:57:25 crc kubenswrapper[4833]: I0217 13:57:25.912857 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-5cfll" Feb 17 13:57:25 crc kubenswrapper[4833]: I0217 13:57:25.916644 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jlgbt" event={"ID":"e60895e0-eccc-4e5c-adbe-c2286162959e","Type":"ContainerStarted","Data":"938d053d8f38487f6e1371206e88bb062aeeb741c74f68a0a6f16adb71afe541"} Feb 17 13:57:25 crc kubenswrapper[4833]: I0217 13:57:25.935757 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-5cfll" podStartSLOduration=2.607583557 podStartE2EDuration="8.935737973s" podCreationTimestamp="2026-02-17 13:57:17 +0000 UTC" firstStartedPulling="2026-02-17 13:57:19.001761417 +0000 UTC m=+728.636860870" lastFinishedPulling="2026-02-17 13:57:25.329915853 +0000 UTC m=+734.965015286" observedRunningTime="2026-02-17 13:57:25.930830763 +0000 UTC m=+735.565930216" watchObservedRunningTime="2026-02-17 13:57:25.935737973 +0000 UTC m=+735.570837426" Feb 17 13:57:25 crc kubenswrapper[4833]: I0217 13:57:25.955201 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jlgbt" 
podStartSLOduration=2.306614682 podStartE2EDuration="8.955178627s" podCreationTimestamp="2026-02-17 13:57:17 +0000 UTC" firstStartedPulling="2026-02-17 13:57:18.689333206 +0000 UTC m=+728.324432639" lastFinishedPulling="2026-02-17 13:57:25.337897151 +0000 UTC m=+734.972996584" observedRunningTime="2026-02-17 13:57:25.952748458 +0000 UTC m=+735.587847891" watchObservedRunningTime="2026-02-17 13:57:25.955178627 +0000 UTC m=+735.590278070" Feb 17 13:57:28 crc kubenswrapper[4833]: I0217 13:57:28.292050 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-dz6xg" Feb 17 13:57:28 crc kubenswrapper[4833]: I0217 13:57:28.608786 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5f6c5d6894-pl4qb" Feb 17 13:57:28 crc kubenswrapper[4833]: I0217 13:57:28.608853 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5f6c5d6894-pl4qb" Feb 17 13:57:28 crc kubenswrapper[4833]: I0217 13:57:28.615258 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5f6c5d6894-pl4qb" Feb 17 13:57:28 crc kubenswrapper[4833]: I0217 13:57:28.941366 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5f6c5d6894-pl4qb" Feb 17 13:57:28 crc kubenswrapper[4833]: I0217 13:57:28.987224 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-xwz2m"] Feb 17 13:57:38 crc kubenswrapper[4833]: I0217 13:57:38.822094 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-5cfll" Feb 17 13:57:44 crc kubenswrapper[4833]: I0217 13:57:44.243951 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:57:44 crc kubenswrapper[4833]: I0217 13:57:44.244450 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:57:44 crc kubenswrapper[4833]: I0217 13:57:44.244519 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" Feb 17 13:57:44 crc kubenswrapper[4833]: I0217 13:57:44.245161 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2987cca443b50c5381fbff8d4447cf8801984c777d16362c800622301c56c146"} pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 13:57:44 crc kubenswrapper[4833]: I0217 13:57:44.245217 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" containerID="cri-o://2987cca443b50c5381fbff8d4447cf8801984c777d16362c800622301c56c146" gracePeriod=600 Feb 17 13:57:45 crc kubenswrapper[4833]: I0217 13:57:45.057220 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerID="2987cca443b50c5381fbff8d4447cf8801984c777d16362c800622301c56c146" exitCode=0 Feb 17 13:57:45 crc kubenswrapper[4833]: I0217 13:57:45.057231 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" 
event={"ID":"f4a1ca83-1919-4f9c-82de-c849cbd50e70","Type":"ContainerDied","Data":"2987cca443b50c5381fbff8d4447cf8801984c777d16362c800622301c56c146"} Feb 17 13:57:45 crc kubenswrapper[4833]: I0217 13:57:45.057701 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" event={"ID":"f4a1ca83-1919-4f9c-82de-c849cbd50e70","Type":"ContainerStarted","Data":"1a5e1a9a5d589559159e0a6c8e522d3462758f7e97a786e9713efff02185b2a7"} Feb 17 13:57:45 crc kubenswrapper[4833]: I0217 13:57:45.057737 4833 scope.go:117] "RemoveContainer" containerID="3ac9e7c6a3aaeefcf86cef8591aaf0874f4e451b15c7e636ff29c4f41608981f" Feb 17 13:57:46 crc kubenswrapper[4833]: I0217 13:57:46.302113 4833 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 13:57:53 crc kubenswrapper[4833]: I0217 13:57:53.133025 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb"] Feb 17 13:57:53 crc kubenswrapper[4833]: I0217 13:57:53.134853 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb" Feb 17 13:57:53 crc kubenswrapper[4833]: I0217 13:57:53.136805 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 13:57:53 crc kubenswrapper[4833]: I0217 13:57:53.165536 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb"] Feb 17 13:57:53 crc kubenswrapper[4833]: I0217 13:57:53.227777 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n2xd\" (UniqueName: \"kubernetes.io/projected/5dd72ffb-4456-49bc-9dae-ba715f5a12fa-kube-api-access-4n2xd\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb\" (UID: \"5dd72ffb-4456-49bc-9dae-ba715f5a12fa\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb" Feb 17 13:57:53 crc kubenswrapper[4833]: I0217 13:57:53.227920 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5dd72ffb-4456-49bc-9dae-ba715f5a12fa-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb\" (UID: \"5dd72ffb-4456-49bc-9dae-ba715f5a12fa\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb" Feb 17 13:57:53 crc kubenswrapper[4833]: I0217 13:57:53.228003 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5dd72ffb-4456-49bc-9dae-ba715f5a12fa-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb\" (UID: \"5dd72ffb-4456-49bc-9dae-ba715f5a12fa\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb" Feb 17 13:57:53 crc kubenswrapper[4833]: 
I0217 13:57:53.328589 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n2xd\" (UniqueName: \"kubernetes.io/projected/5dd72ffb-4456-49bc-9dae-ba715f5a12fa-kube-api-access-4n2xd\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb\" (UID: \"5dd72ffb-4456-49bc-9dae-ba715f5a12fa\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb" Feb 17 13:57:53 crc kubenswrapper[4833]: I0217 13:57:53.328908 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5dd72ffb-4456-49bc-9dae-ba715f5a12fa-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb\" (UID: \"5dd72ffb-4456-49bc-9dae-ba715f5a12fa\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb" Feb 17 13:57:53 crc kubenswrapper[4833]: I0217 13:57:53.329013 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5dd72ffb-4456-49bc-9dae-ba715f5a12fa-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb\" (UID: \"5dd72ffb-4456-49bc-9dae-ba715f5a12fa\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb" Feb 17 13:57:53 crc kubenswrapper[4833]: I0217 13:57:53.329395 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5dd72ffb-4456-49bc-9dae-ba715f5a12fa-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb\" (UID: \"5dd72ffb-4456-49bc-9dae-ba715f5a12fa\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb" Feb 17 13:57:53 crc kubenswrapper[4833]: I0217 13:57:53.329453 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/5dd72ffb-4456-49bc-9dae-ba715f5a12fa-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb\" (UID: \"5dd72ffb-4456-49bc-9dae-ba715f5a12fa\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb" Feb 17 13:57:53 crc kubenswrapper[4833]: I0217 13:57:53.350180 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n2xd\" (UniqueName: \"kubernetes.io/projected/5dd72ffb-4456-49bc-9dae-ba715f5a12fa-kube-api-access-4n2xd\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb\" (UID: \"5dd72ffb-4456-49bc-9dae-ba715f5a12fa\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb" Feb 17 13:57:53 crc kubenswrapper[4833]: I0217 13:57:53.456830 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb" Feb 17 13:57:53 crc kubenswrapper[4833]: I0217 13:57:53.759921 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb"] Feb 17 13:57:54 crc kubenswrapper[4833]: I0217 13:57:54.020921 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-xwz2m" podUID="b617c42f-c749-41e5-a305-692a4c631656" containerName="console" containerID="cri-o://ac5d722540308001923896959a864291127c6e8f7d157d23be8d8dc01b7f0d51" gracePeriod=15 Feb 17 13:57:54 crc kubenswrapper[4833]: I0217 13:57:54.113480 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb" event={"ID":"5dd72ffb-4456-49bc-9dae-ba715f5a12fa","Type":"ContainerStarted","Data":"f4ee54951dd540752ba23d09086c436553f677c577503435dfb29e6dba5e9eb6"} Feb 17 13:57:54 crc kubenswrapper[4833]: I0217 13:57:54.795952 4833 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-xwz2m_b617c42f-c749-41e5-a305-692a4c631656/console/0.log" Feb 17 13:57:54 crc kubenswrapper[4833]: I0217 13:57:54.796229 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xwz2m" Feb 17 13:57:54 crc kubenswrapper[4833]: I0217 13:57:54.959989 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzqdl\" (UniqueName: \"kubernetes.io/projected/b617c42f-c749-41e5-a305-692a4c631656-kube-api-access-dzqdl\") pod \"b617c42f-c749-41e5-a305-692a4c631656\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " Feb 17 13:57:54 crc kubenswrapper[4833]: I0217 13:57:54.960051 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b617c42f-c749-41e5-a305-692a4c631656-console-oauth-config\") pod \"b617c42f-c749-41e5-a305-692a4c631656\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " Feb 17 13:57:54 crc kubenswrapper[4833]: I0217 13:57:54.960105 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b617c42f-c749-41e5-a305-692a4c631656-console-serving-cert\") pod \"b617c42f-c749-41e5-a305-692a4c631656\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " Feb 17 13:57:54 crc kubenswrapper[4833]: I0217 13:57:54.960127 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-trusted-ca-bundle\") pod \"b617c42f-c749-41e5-a305-692a4c631656\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " Feb 17 13:57:54 crc kubenswrapper[4833]: I0217 13:57:54.960176 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-service-ca\") pod \"b617c42f-c749-41e5-a305-692a4c631656\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " Feb 17 13:57:54 crc kubenswrapper[4833]: I0217 13:57:54.960198 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-oauth-serving-cert\") pod \"b617c42f-c749-41e5-a305-692a4c631656\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " Feb 17 13:57:54 crc kubenswrapper[4833]: I0217 13:57:54.960213 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-console-config\") pod \"b617c42f-c749-41e5-a305-692a4c631656\" (UID: \"b617c42f-c749-41e5-a305-692a4c631656\") " Feb 17 13:57:54 crc kubenswrapper[4833]: I0217 13:57:54.960790 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-console-config" (OuterVolumeSpecName: "console-config") pod "b617c42f-c749-41e5-a305-692a4c631656" (UID: "b617c42f-c749-41e5-a305-692a4c631656"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:57:54 crc kubenswrapper[4833]: I0217 13:57:54.960944 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-service-ca" (OuterVolumeSpecName: "service-ca") pod "b617c42f-c749-41e5-a305-692a4c631656" (UID: "b617c42f-c749-41e5-a305-692a4c631656"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:57:54 crc kubenswrapper[4833]: I0217 13:57:54.961254 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b617c42f-c749-41e5-a305-692a4c631656" (UID: "b617c42f-c749-41e5-a305-692a4c631656"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:57:54 crc kubenswrapper[4833]: I0217 13:57:54.961279 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b617c42f-c749-41e5-a305-692a4c631656" (UID: "b617c42f-c749-41e5-a305-692a4c631656"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:57:54 crc kubenswrapper[4833]: I0217 13:57:54.965639 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b617c42f-c749-41e5-a305-692a4c631656-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b617c42f-c749-41e5-a305-692a4c631656" (UID: "b617c42f-c749-41e5-a305-692a4c631656"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:54 crc kubenswrapper[4833]: I0217 13:57:54.965905 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b617c42f-c749-41e5-a305-692a4c631656-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b617c42f-c749-41e5-a305-692a4c631656" (UID: "b617c42f-c749-41e5-a305-692a4c631656"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:54 crc kubenswrapper[4833]: I0217 13:57:54.967389 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b617c42f-c749-41e5-a305-692a4c631656-kube-api-access-dzqdl" (OuterVolumeSpecName: "kube-api-access-dzqdl") pod "b617c42f-c749-41e5-a305-692a4c631656" (UID: "b617c42f-c749-41e5-a305-692a4c631656"). InnerVolumeSpecName "kube-api-access-dzqdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.061476 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzqdl\" (UniqueName: \"kubernetes.io/projected/b617c42f-c749-41e5-a305-692a4c631656-kube-api-access-dzqdl\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.061515 4833 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b617c42f-c749-41e5-a305-692a4c631656-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.061527 4833 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b617c42f-c749-41e5-a305-692a4c631656-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.061538 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.061550 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.061561 4833 reconciler_common.go:293] "Volume detached 
for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.061572 4833 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b617c42f-c749-41e5-a305-692a4c631656-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.122410 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-xwz2m_b617c42f-c749-41e5-a305-692a4c631656/console/0.log" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.122475 4833 generic.go:334] "Generic (PLEG): container finished" podID="b617c42f-c749-41e5-a305-692a4c631656" containerID="ac5d722540308001923896959a864291127c6e8f7d157d23be8d8dc01b7f0d51" exitCode=2 Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.122560 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xwz2m" event={"ID":"b617c42f-c749-41e5-a305-692a4c631656","Type":"ContainerDied","Data":"ac5d722540308001923896959a864291127c6e8f7d157d23be8d8dc01b7f0d51"} Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.122586 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-xwz2m" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.122616 4833 scope.go:117] "RemoveContainer" containerID="ac5d722540308001923896959a864291127c6e8f7d157d23be8d8dc01b7f0d51" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.122596 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xwz2m" event={"ID":"b617c42f-c749-41e5-a305-692a4c631656","Type":"ContainerDied","Data":"ebd1a6d9f48e5fec33b85b8c830eeaaf522672e5f57c55d9a1d1b0144823535c"} Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.124850 4833 generic.go:334] "Generic (PLEG): container finished" podID="5dd72ffb-4456-49bc-9dae-ba715f5a12fa" containerID="53baa949fdaeea7cea958b14aafba58052cdf44006caaa0ecf56f1bd83cd1baa" exitCode=0 Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.124875 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb" event={"ID":"5dd72ffb-4456-49bc-9dae-ba715f5a12fa","Type":"ContainerDied","Data":"53baa949fdaeea7cea958b14aafba58052cdf44006caaa0ecf56f1bd83cd1baa"} Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.142459 4833 scope.go:117] "RemoveContainer" containerID="ac5d722540308001923896959a864291127c6e8f7d157d23be8d8dc01b7f0d51" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.142564 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-xwz2m"] Feb 17 13:57:55 crc kubenswrapper[4833]: E0217 13:57:55.142853 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac5d722540308001923896959a864291127c6e8f7d157d23be8d8dc01b7f0d51\": container with ID starting with ac5d722540308001923896959a864291127c6e8f7d157d23be8d8dc01b7f0d51 not found: ID does not exist" containerID="ac5d722540308001923896959a864291127c6e8f7d157d23be8d8dc01b7f0d51" Feb 17 
13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.142885 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac5d722540308001923896959a864291127c6e8f7d157d23be8d8dc01b7f0d51"} err="failed to get container status \"ac5d722540308001923896959a864291127c6e8f7d157d23be8d8dc01b7f0d51\": rpc error: code = NotFound desc = could not find container \"ac5d722540308001923896959a864291127c6e8f7d157d23be8d8dc01b7f0d51\": container with ID starting with ac5d722540308001923896959a864291127c6e8f7d157d23be8d8dc01b7f0d51 not found: ID does not exist" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.149318 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-xwz2m"] Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.457225 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tc8dt"] Feb 17 13:57:55 crc kubenswrapper[4833]: E0217 13:57:55.457473 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b617c42f-c749-41e5-a305-692a4c631656" containerName="console" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.457488 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b617c42f-c749-41e5-a305-692a4c631656" containerName="console" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.457627 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b617c42f-c749-41e5-a305-692a4c631656" containerName="console" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.458619 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tc8dt" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.525136 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tc8dt"] Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.568556 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d32503c9-917e-4066-b28b-ddb951efad67-catalog-content\") pod \"redhat-operators-tc8dt\" (UID: \"d32503c9-917e-4066-b28b-ddb951efad67\") " pod="openshift-marketplace/redhat-operators-tc8dt" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.568610 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d32503c9-917e-4066-b28b-ddb951efad67-utilities\") pod \"redhat-operators-tc8dt\" (UID: \"d32503c9-917e-4066-b28b-ddb951efad67\") " pod="openshift-marketplace/redhat-operators-tc8dt" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.568685 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qph6\" (UniqueName: \"kubernetes.io/projected/d32503c9-917e-4066-b28b-ddb951efad67-kube-api-access-5qph6\") pod \"redhat-operators-tc8dt\" (UID: \"d32503c9-917e-4066-b28b-ddb951efad67\") " pod="openshift-marketplace/redhat-operators-tc8dt" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.669687 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qph6\" (UniqueName: \"kubernetes.io/projected/d32503c9-917e-4066-b28b-ddb951efad67-kube-api-access-5qph6\") pod \"redhat-operators-tc8dt\" (UID: \"d32503c9-917e-4066-b28b-ddb951efad67\") " pod="openshift-marketplace/redhat-operators-tc8dt" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.669791 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d32503c9-917e-4066-b28b-ddb951efad67-catalog-content\") pod \"redhat-operators-tc8dt\" (UID: \"d32503c9-917e-4066-b28b-ddb951efad67\") " pod="openshift-marketplace/redhat-operators-tc8dt" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.669821 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d32503c9-917e-4066-b28b-ddb951efad67-utilities\") pod \"redhat-operators-tc8dt\" (UID: \"d32503c9-917e-4066-b28b-ddb951efad67\") " pod="openshift-marketplace/redhat-operators-tc8dt" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.670252 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d32503c9-917e-4066-b28b-ddb951efad67-utilities\") pod \"redhat-operators-tc8dt\" (UID: \"d32503c9-917e-4066-b28b-ddb951efad67\") " pod="openshift-marketplace/redhat-operators-tc8dt" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.670449 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d32503c9-917e-4066-b28b-ddb951efad67-catalog-content\") pod \"redhat-operators-tc8dt\" (UID: \"d32503c9-917e-4066-b28b-ddb951efad67\") " pod="openshift-marketplace/redhat-operators-tc8dt" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.695969 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qph6\" (UniqueName: \"kubernetes.io/projected/d32503c9-917e-4066-b28b-ddb951efad67-kube-api-access-5qph6\") pod \"redhat-operators-tc8dt\" (UID: \"d32503c9-917e-4066-b28b-ddb951efad67\") " pod="openshift-marketplace/redhat-operators-tc8dt" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.789652 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tc8dt" Feb 17 13:57:55 crc kubenswrapper[4833]: I0217 13:57:55.999720 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tc8dt"] Feb 17 13:57:56 crc kubenswrapper[4833]: W0217 13:57:56.009635 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd32503c9_917e_4066_b28b_ddb951efad67.slice/crio-aa19890f63af4c507b8cba517f72d85f5769609ff62f955519e01952bafb56ab WatchSource:0}: Error finding container aa19890f63af4c507b8cba517f72d85f5769609ff62f955519e01952bafb56ab: Status 404 returned error can't find the container with id aa19890f63af4c507b8cba517f72d85f5769609ff62f955519e01952bafb56ab Feb 17 13:57:56 crc kubenswrapper[4833]: I0217 13:57:56.134469 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tc8dt" event={"ID":"d32503c9-917e-4066-b28b-ddb951efad67","Type":"ContainerStarted","Data":"aa19890f63af4c507b8cba517f72d85f5769609ff62f955519e01952bafb56ab"} Feb 17 13:57:57 crc kubenswrapper[4833]: I0217 13:57:57.051375 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b617c42f-c749-41e5-a305-692a4c631656" path="/var/lib/kubelet/pods/b617c42f-c749-41e5-a305-692a4c631656/volumes" Feb 17 13:57:57 crc kubenswrapper[4833]: I0217 13:57:57.143646 4833 generic.go:334] "Generic (PLEG): container finished" podID="5dd72ffb-4456-49bc-9dae-ba715f5a12fa" containerID="d41772e565f64236630177e0b711fb9a01a380d5f7fa0ceb81c813086f871547" exitCode=0 Feb 17 13:57:57 crc kubenswrapper[4833]: I0217 13:57:57.143712 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb" event={"ID":"5dd72ffb-4456-49bc-9dae-ba715f5a12fa","Type":"ContainerDied","Data":"d41772e565f64236630177e0b711fb9a01a380d5f7fa0ceb81c813086f871547"} Feb 17 13:57:57 crc 
kubenswrapper[4833]: I0217 13:57:57.147345 4833 generic.go:334] "Generic (PLEG): container finished" podID="d32503c9-917e-4066-b28b-ddb951efad67" containerID="d898b00f30dd63bc860f4655ee4d5a4a1555d7af6b7d79ed50b8a9da83db2f33" exitCode=0 Feb 17 13:57:57 crc kubenswrapper[4833]: I0217 13:57:57.147402 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tc8dt" event={"ID":"d32503c9-917e-4066-b28b-ddb951efad67","Type":"ContainerDied","Data":"d898b00f30dd63bc860f4655ee4d5a4a1555d7af6b7d79ed50b8a9da83db2f33"} Feb 17 13:57:58 crc kubenswrapper[4833]: I0217 13:57:58.156384 4833 generic.go:334] "Generic (PLEG): container finished" podID="5dd72ffb-4456-49bc-9dae-ba715f5a12fa" containerID="169ce345070ac89defda55f51fb5f574d83bc00b7fb2241b5099782cc9858775" exitCode=0 Feb 17 13:57:58 crc kubenswrapper[4833]: I0217 13:57:58.156455 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb" event={"ID":"5dd72ffb-4456-49bc-9dae-ba715f5a12fa","Type":"ContainerDied","Data":"169ce345070ac89defda55f51fb5f574d83bc00b7fb2241b5099782cc9858775"} Feb 17 13:57:58 crc kubenswrapper[4833]: I0217 13:57:58.159122 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tc8dt" event={"ID":"d32503c9-917e-4066-b28b-ddb951efad67","Type":"ContainerStarted","Data":"099c79d33c6b96a4343ee23ac33ef571324de7c8a5e8aa5056023a0f8d9124eb"} Feb 17 13:57:59 crc kubenswrapper[4833]: I0217 13:57:59.167815 4833 generic.go:334] "Generic (PLEG): container finished" podID="d32503c9-917e-4066-b28b-ddb951efad67" containerID="099c79d33c6b96a4343ee23ac33ef571324de7c8a5e8aa5056023a0f8d9124eb" exitCode=0 Feb 17 13:57:59 crc kubenswrapper[4833]: I0217 13:57:59.167857 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tc8dt" 
event={"ID":"d32503c9-917e-4066-b28b-ddb951efad67","Type":"ContainerDied","Data":"099c79d33c6b96a4343ee23ac33ef571324de7c8a5e8aa5056023a0f8d9124eb"} Feb 17 13:57:59 crc kubenswrapper[4833]: I0217 13:57:59.434200 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb" Feb 17 13:57:59 crc kubenswrapper[4833]: I0217 13:57:59.517745 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5dd72ffb-4456-49bc-9dae-ba715f5a12fa-bundle\") pod \"5dd72ffb-4456-49bc-9dae-ba715f5a12fa\" (UID: \"5dd72ffb-4456-49bc-9dae-ba715f5a12fa\") " Feb 17 13:57:59 crc kubenswrapper[4833]: I0217 13:57:59.517809 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n2xd\" (UniqueName: \"kubernetes.io/projected/5dd72ffb-4456-49bc-9dae-ba715f5a12fa-kube-api-access-4n2xd\") pod \"5dd72ffb-4456-49bc-9dae-ba715f5a12fa\" (UID: \"5dd72ffb-4456-49bc-9dae-ba715f5a12fa\") " Feb 17 13:57:59 crc kubenswrapper[4833]: I0217 13:57:59.517886 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5dd72ffb-4456-49bc-9dae-ba715f5a12fa-util\") pod \"5dd72ffb-4456-49bc-9dae-ba715f5a12fa\" (UID: \"5dd72ffb-4456-49bc-9dae-ba715f5a12fa\") " Feb 17 13:57:59 crc kubenswrapper[4833]: I0217 13:57:59.519026 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dd72ffb-4456-49bc-9dae-ba715f5a12fa-bundle" (OuterVolumeSpecName: "bundle") pod "5dd72ffb-4456-49bc-9dae-ba715f5a12fa" (UID: "5dd72ffb-4456-49bc-9dae-ba715f5a12fa"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:57:59 crc kubenswrapper[4833]: I0217 13:57:59.527200 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dd72ffb-4456-49bc-9dae-ba715f5a12fa-kube-api-access-4n2xd" (OuterVolumeSpecName: "kube-api-access-4n2xd") pod "5dd72ffb-4456-49bc-9dae-ba715f5a12fa" (UID: "5dd72ffb-4456-49bc-9dae-ba715f5a12fa"). InnerVolumeSpecName "kube-api-access-4n2xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:57:59 crc kubenswrapper[4833]: I0217 13:57:59.533382 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dd72ffb-4456-49bc-9dae-ba715f5a12fa-util" (OuterVolumeSpecName: "util") pod "5dd72ffb-4456-49bc-9dae-ba715f5a12fa" (UID: "5dd72ffb-4456-49bc-9dae-ba715f5a12fa"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:57:59 crc kubenswrapper[4833]: I0217 13:57:59.619307 4833 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5dd72ffb-4456-49bc-9dae-ba715f5a12fa-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:59 crc kubenswrapper[4833]: I0217 13:57:59.619353 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n2xd\" (UniqueName: \"kubernetes.io/projected/5dd72ffb-4456-49bc-9dae-ba715f5a12fa-kube-api-access-4n2xd\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:59 crc kubenswrapper[4833]: I0217 13:57:59.619373 4833 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5dd72ffb-4456-49bc-9dae-ba715f5a12fa-util\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:00 crc kubenswrapper[4833]: I0217 13:58:00.178317 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb" 
event={"ID":"5dd72ffb-4456-49bc-9dae-ba715f5a12fa","Type":"ContainerDied","Data":"f4ee54951dd540752ba23d09086c436553f677c577503435dfb29e6dba5e9eb6"} Feb 17 13:58:00 crc kubenswrapper[4833]: I0217 13:58:00.178365 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4ee54951dd540752ba23d09086c436553f677c577503435dfb29e6dba5e9eb6" Feb 17 13:58:00 crc kubenswrapper[4833]: I0217 13:58:00.178414 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb" Feb 17 13:58:01 crc kubenswrapper[4833]: I0217 13:58:01.186002 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tc8dt" event={"ID":"d32503c9-917e-4066-b28b-ddb951efad67","Type":"ContainerStarted","Data":"90c83a91a73403fa5e04fd6a2562cc2fe51150e21aa3bf285c887b534d02027b"} Feb 17 13:58:01 crc kubenswrapper[4833]: I0217 13:58:01.208769 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tc8dt" podStartSLOduration=2.771900725 podStartE2EDuration="6.208742204s" podCreationTimestamp="2026-02-17 13:57:55 +0000 UTC" firstStartedPulling="2026-02-17 13:57:57.149164173 +0000 UTC m=+766.784263606" lastFinishedPulling="2026-02-17 13:58:00.586005662 +0000 UTC m=+770.221105085" observedRunningTime="2026-02-17 13:58:01.207769957 +0000 UTC m=+770.842869480" watchObservedRunningTime="2026-02-17 13:58:01.208742204 +0000 UTC m=+770.843841677" Feb 17 13:58:05 crc kubenswrapper[4833]: I0217 13:58:05.790801 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tc8dt" Feb 17 13:58:05 crc kubenswrapper[4833]: I0217 13:58:05.791765 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tc8dt" Feb 17 13:58:06 crc kubenswrapper[4833]: I0217 13:58:06.837747 4833 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tc8dt" podUID="d32503c9-917e-4066-b28b-ddb951efad67" containerName="registry-server" probeResult="failure" output=< Feb 17 13:58:06 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Feb 17 13:58:06 crc kubenswrapper[4833]: > Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.078104 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-55cdbb5474-tkm6p"] Feb 17 13:58:10 crc kubenswrapper[4833]: E0217 13:58:10.078572 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd72ffb-4456-49bc-9dae-ba715f5a12fa" containerName="pull" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.078584 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd72ffb-4456-49bc-9dae-ba715f5a12fa" containerName="pull" Feb 17 13:58:10 crc kubenswrapper[4833]: E0217 13:58:10.078607 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd72ffb-4456-49bc-9dae-ba715f5a12fa" containerName="util" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.078613 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd72ffb-4456-49bc-9dae-ba715f5a12fa" containerName="util" Feb 17 13:58:10 crc kubenswrapper[4833]: E0217 13:58:10.078625 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd72ffb-4456-49bc-9dae-ba715f5a12fa" containerName="extract" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.078631 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd72ffb-4456-49bc-9dae-ba715f5a12fa" containerName="extract" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.078721 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dd72ffb-4456-49bc-9dae-ba715f5a12fa" containerName="extract" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.079117 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-55cdbb5474-tkm6p" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.080983 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.081173 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.081294 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.081821 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-m4c9m" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.081835 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.116466 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-55cdbb5474-tkm6p"] Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.253904 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/43cee1ba-407b-420e-985d-3cc8806c0092-apiservice-cert\") pod \"metallb-operator-controller-manager-55cdbb5474-tkm6p\" (UID: \"43cee1ba-407b-420e-985d-3cc8806c0092\") " pod="metallb-system/metallb-operator-controller-manager-55cdbb5474-tkm6p" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.253982 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57d7p\" (UniqueName: \"kubernetes.io/projected/43cee1ba-407b-420e-985d-3cc8806c0092-kube-api-access-57d7p\") pod 
\"metallb-operator-controller-manager-55cdbb5474-tkm6p\" (UID: \"43cee1ba-407b-420e-985d-3cc8806c0092\") " pod="metallb-system/metallb-operator-controller-manager-55cdbb5474-tkm6p" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.254028 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/43cee1ba-407b-420e-985d-3cc8806c0092-webhook-cert\") pod \"metallb-operator-controller-manager-55cdbb5474-tkm6p\" (UID: \"43cee1ba-407b-420e-985d-3cc8806c0092\") " pod="metallb-system/metallb-operator-controller-manager-55cdbb5474-tkm6p" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.356697 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57d7p\" (UniqueName: \"kubernetes.io/projected/43cee1ba-407b-420e-985d-3cc8806c0092-kube-api-access-57d7p\") pod \"metallb-operator-controller-manager-55cdbb5474-tkm6p\" (UID: \"43cee1ba-407b-420e-985d-3cc8806c0092\") " pod="metallb-system/metallb-operator-controller-manager-55cdbb5474-tkm6p" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.356955 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/43cee1ba-407b-420e-985d-3cc8806c0092-webhook-cert\") pod \"metallb-operator-controller-manager-55cdbb5474-tkm6p\" (UID: \"43cee1ba-407b-420e-985d-3cc8806c0092\") " pod="metallb-system/metallb-operator-controller-manager-55cdbb5474-tkm6p" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.357014 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/43cee1ba-407b-420e-985d-3cc8806c0092-apiservice-cert\") pod \"metallb-operator-controller-manager-55cdbb5474-tkm6p\" (UID: \"43cee1ba-407b-420e-985d-3cc8806c0092\") " pod="metallb-system/metallb-operator-controller-manager-55cdbb5474-tkm6p" Feb 17 13:58:10 crc 
kubenswrapper[4833]: I0217 13:58:10.363826 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/43cee1ba-407b-420e-985d-3cc8806c0092-apiservice-cert\") pod \"metallb-operator-controller-manager-55cdbb5474-tkm6p\" (UID: \"43cee1ba-407b-420e-985d-3cc8806c0092\") " pod="metallb-system/metallb-operator-controller-manager-55cdbb5474-tkm6p" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.386183 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57d7p\" (UniqueName: \"kubernetes.io/projected/43cee1ba-407b-420e-985d-3cc8806c0092-kube-api-access-57d7p\") pod \"metallb-operator-controller-manager-55cdbb5474-tkm6p\" (UID: \"43cee1ba-407b-420e-985d-3cc8806c0092\") " pod="metallb-system/metallb-operator-controller-manager-55cdbb5474-tkm6p" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.387582 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/43cee1ba-407b-420e-985d-3cc8806c0092-webhook-cert\") pod \"metallb-operator-controller-manager-55cdbb5474-tkm6p\" (UID: \"43cee1ba-407b-420e-985d-3cc8806c0092\") " pod="metallb-system/metallb-operator-controller-manager-55cdbb5474-tkm6p" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.397916 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-55cdbb5474-tkm6p" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.405539 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6cbf77cb9b-nv8ks"] Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.406327 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6cbf77cb9b-nv8ks" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.412749 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-xqtzd" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.412987 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.413187 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.423744 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6cbf77cb9b-nv8ks"] Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.560582 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/266df56f-825e-4e4e-89e2-90deb0c2fa79-webhook-cert\") pod \"metallb-operator-webhook-server-6cbf77cb9b-nv8ks\" (UID: \"266df56f-825e-4e4e-89e2-90deb0c2fa79\") " pod="metallb-system/metallb-operator-webhook-server-6cbf77cb9b-nv8ks" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.560974 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqmkb\" (UniqueName: \"kubernetes.io/projected/266df56f-825e-4e4e-89e2-90deb0c2fa79-kube-api-access-xqmkb\") pod \"metallb-operator-webhook-server-6cbf77cb9b-nv8ks\" (UID: \"266df56f-825e-4e4e-89e2-90deb0c2fa79\") " pod="metallb-system/metallb-operator-webhook-server-6cbf77cb9b-nv8ks" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.561000 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/266df56f-825e-4e4e-89e2-90deb0c2fa79-apiservice-cert\") pod \"metallb-operator-webhook-server-6cbf77cb9b-nv8ks\" (UID: \"266df56f-825e-4e4e-89e2-90deb0c2fa79\") " pod="metallb-system/metallb-operator-webhook-server-6cbf77cb9b-nv8ks" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.661927 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqmkb\" (UniqueName: \"kubernetes.io/projected/266df56f-825e-4e4e-89e2-90deb0c2fa79-kube-api-access-xqmkb\") pod \"metallb-operator-webhook-server-6cbf77cb9b-nv8ks\" (UID: \"266df56f-825e-4e4e-89e2-90deb0c2fa79\") " pod="metallb-system/metallb-operator-webhook-server-6cbf77cb9b-nv8ks" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.661990 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/266df56f-825e-4e4e-89e2-90deb0c2fa79-apiservice-cert\") pod \"metallb-operator-webhook-server-6cbf77cb9b-nv8ks\" (UID: \"266df56f-825e-4e4e-89e2-90deb0c2fa79\") " pod="metallb-system/metallb-operator-webhook-server-6cbf77cb9b-nv8ks" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.662060 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/266df56f-825e-4e4e-89e2-90deb0c2fa79-webhook-cert\") pod \"metallb-operator-webhook-server-6cbf77cb9b-nv8ks\" (UID: \"266df56f-825e-4e4e-89e2-90deb0c2fa79\") " pod="metallb-system/metallb-operator-webhook-server-6cbf77cb9b-nv8ks" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.666068 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/266df56f-825e-4e4e-89e2-90deb0c2fa79-webhook-cert\") pod \"metallb-operator-webhook-server-6cbf77cb9b-nv8ks\" (UID: \"266df56f-825e-4e4e-89e2-90deb0c2fa79\") " pod="metallb-system/metallb-operator-webhook-server-6cbf77cb9b-nv8ks" Feb 17 13:58:10 
crc kubenswrapper[4833]: I0217 13:58:10.672781 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/266df56f-825e-4e4e-89e2-90deb0c2fa79-apiservice-cert\") pod \"metallb-operator-webhook-server-6cbf77cb9b-nv8ks\" (UID: \"266df56f-825e-4e4e-89e2-90deb0c2fa79\") " pod="metallb-system/metallb-operator-webhook-server-6cbf77cb9b-nv8ks" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.681882 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqmkb\" (UniqueName: \"kubernetes.io/projected/266df56f-825e-4e4e-89e2-90deb0c2fa79-kube-api-access-xqmkb\") pod \"metallb-operator-webhook-server-6cbf77cb9b-nv8ks\" (UID: \"266df56f-825e-4e4e-89e2-90deb0c2fa79\") " pod="metallb-system/metallb-operator-webhook-server-6cbf77cb9b-nv8ks" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.784216 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6cbf77cb9b-nv8ks" Feb 17 13:58:10 crc kubenswrapper[4833]: I0217 13:58:10.828496 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-55cdbb5474-tkm6p"] Feb 17 13:58:10 crc kubenswrapper[4833]: W0217 13:58:10.840610 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43cee1ba_407b_420e_985d_3cc8806c0092.slice/crio-037feac8b956fd5dc98f742a136de2ade22dcf490592a88787693332ef828418 WatchSource:0}: Error finding container 037feac8b956fd5dc98f742a136de2ade22dcf490592a88787693332ef828418: Status 404 returned error can't find the container with id 037feac8b956fd5dc98f742a136de2ade22dcf490592a88787693332ef828418 Feb 17 13:58:11 crc kubenswrapper[4833]: I0217 13:58:11.041337 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6cbf77cb9b-nv8ks"] Feb 17 13:58:11 crc 
kubenswrapper[4833]: I0217 13:58:11.260812 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-55cdbb5474-tkm6p" event={"ID":"43cee1ba-407b-420e-985d-3cc8806c0092","Type":"ContainerStarted","Data":"037feac8b956fd5dc98f742a136de2ade22dcf490592a88787693332ef828418"} Feb 17 13:58:11 crc kubenswrapper[4833]: I0217 13:58:11.266776 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6cbf77cb9b-nv8ks" event={"ID":"266df56f-825e-4e4e-89e2-90deb0c2fa79","Type":"ContainerStarted","Data":"f29319c7407b95a7ac85ecf16cd19a48f7b7895b90c8b45a289b4e429a542cdb"} Feb 17 13:58:14 crc kubenswrapper[4833]: I0217 13:58:14.288393 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-55cdbb5474-tkm6p" event={"ID":"43cee1ba-407b-420e-985d-3cc8806c0092","Type":"ContainerStarted","Data":"afeb45e39d89e1515373e83502282c36732a85a34931f7d53789bc139bfe7fe5"} Feb 17 13:58:14 crc kubenswrapper[4833]: I0217 13:58:14.289528 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-55cdbb5474-tkm6p" Feb 17 13:58:14 crc kubenswrapper[4833]: I0217 13:58:14.306110 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-55cdbb5474-tkm6p" podStartSLOduration=1.236351065 podStartE2EDuration="4.306097234s" podCreationTimestamp="2026-02-17 13:58:10 +0000 UTC" firstStartedPulling="2026-02-17 13:58:10.855264358 +0000 UTC m=+780.490363791" lastFinishedPulling="2026-02-17 13:58:13.925010527 +0000 UTC m=+783.560109960" observedRunningTime="2026-02-17 13:58:14.303442689 +0000 UTC m=+783.938542122" watchObservedRunningTime="2026-02-17 13:58:14.306097234 +0000 UTC m=+783.941196657" Feb 17 13:58:16 crc kubenswrapper[4833]: I0217 13:58:16.145772 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-tc8dt" Feb 17 13:58:16 crc kubenswrapper[4833]: I0217 13:58:16.198665 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tc8dt" Feb 17 13:58:17 crc kubenswrapper[4833]: I0217 13:58:17.050005 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tc8dt"] Feb 17 13:58:17 crc kubenswrapper[4833]: I0217 13:58:17.309695 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6cbf77cb9b-nv8ks" event={"ID":"266df56f-825e-4e4e-89e2-90deb0c2fa79","Type":"ContainerStarted","Data":"dd9ac330522c3f0ea69cca71a794bd1961ec89dceb3fc0f4db6b4613439ab865"} Feb 17 13:58:17 crc kubenswrapper[4833]: I0217 13:58:17.309943 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tc8dt" podUID="d32503c9-917e-4066-b28b-ddb951efad67" containerName="registry-server" containerID="cri-o://90c83a91a73403fa5e04fd6a2562cc2fe51150e21aa3bf285c887b534d02027b" gracePeriod=2 Feb 17 13:58:17 crc kubenswrapper[4833]: I0217 13:58:17.346282 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6cbf77cb9b-nv8ks" podStartSLOduration=1.686056166 podStartE2EDuration="7.346256156s" podCreationTimestamp="2026-02-17 13:58:10 +0000 UTC" firstStartedPulling="2026-02-17 13:58:11.058929803 +0000 UTC m=+780.694029236" lastFinishedPulling="2026-02-17 13:58:16.719129793 +0000 UTC m=+786.354229226" observedRunningTime="2026-02-17 13:58:17.339585247 +0000 UTC m=+786.974684690" watchObservedRunningTime="2026-02-17 13:58:17.346256156 +0000 UTC m=+786.981355609" Feb 17 13:58:17 crc kubenswrapper[4833]: I0217 13:58:17.732825 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tc8dt" Feb 17 13:58:17 crc kubenswrapper[4833]: I0217 13:58:17.862247 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qph6\" (UniqueName: \"kubernetes.io/projected/d32503c9-917e-4066-b28b-ddb951efad67-kube-api-access-5qph6\") pod \"d32503c9-917e-4066-b28b-ddb951efad67\" (UID: \"d32503c9-917e-4066-b28b-ddb951efad67\") " Feb 17 13:58:17 crc kubenswrapper[4833]: I0217 13:58:17.862368 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d32503c9-917e-4066-b28b-ddb951efad67-catalog-content\") pod \"d32503c9-917e-4066-b28b-ddb951efad67\" (UID: \"d32503c9-917e-4066-b28b-ddb951efad67\") " Feb 17 13:58:17 crc kubenswrapper[4833]: I0217 13:58:17.862454 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d32503c9-917e-4066-b28b-ddb951efad67-utilities\") pod \"d32503c9-917e-4066-b28b-ddb951efad67\" (UID: \"d32503c9-917e-4066-b28b-ddb951efad67\") " Feb 17 13:58:17 crc kubenswrapper[4833]: I0217 13:58:17.863547 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d32503c9-917e-4066-b28b-ddb951efad67-utilities" (OuterVolumeSpecName: "utilities") pod "d32503c9-917e-4066-b28b-ddb951efad67" (UID: "d32503c9-917e-4066-b28b-ddb951efad67"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:58:17 crc kubenswrapper[4833]: I0217 13:58:17.868391 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d32503c9-917e-4066-b28b-ddb951efad67-kube-api-access-5qph6" (OuterVolumeSpecName: "kube-api-access-5qph6") pod "d32503c9-917e-4066-b28b-ddb951efad67" (UID: "d32503c9-917e-4066-b28b-ddb951efad67"). InnerVolumeSpecName "kube-api-access-5qph6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:58:17 crc kubenswrapper[4833]: I0217 13:58:17.964168 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d32503c9-917e-4066-b28b-ddb951efad67-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:17 crc kubenswrapper[4833]: I0217 13:58:17.964201 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qph6\" (UniqueName: \"kubernetes.io/projected/d32503c9-917e-4066-b28b-ddb951efad67-kube-api-access-5qph6\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:17 crc kubenswrapper[4833]: I0217 13:58:17.992722 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d32503c9-917e-4066-b28b-ddb951efad67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d32503c9-917e-4066-b28b-ddb951efad67" (UID: "d32503c9-917e-4066-b28b-ddb951efad67"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:58:18 crc kubenswrapper[4833]: I0217 13:58:18.065310 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d32503c9-917e-4066-b28b-ddb951efad67-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:18 crc kubenswrapper[4833]: I0217 13:58:18.318705 4833 generic.go:334] "Generic (PLEG): container finished" podID="d32503c9-917e-4066-b28b-ddb951efad67" containerID="90c83a91a73403fa5e04fd6a2562cc2fe51150e21aa3bf285c887b534d02027b" exitCode=0 Feb 17 13:58:18 crc kubenswrapper[4833]: I0217 13:58:18.318749 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tc8dt" event={"ID":"d32503c9-917e-4066-b28b-ddb951efad67","Type":"ContainerDied","Data":"90c83a91a73403fa5e04fd6a2562cc2fe51150e21aa3bf285c887b534d02027b"} Feb 17 13:58:18 crc kubenswrapper[4833]: I0217 13:58:18.318824 4833 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-tc8dt" event={"ID":"d32503c9-917e-4066-b28b-ddb951efad67","Type":"ContainerDied","Data":"aa19890f63af4c507b8cba517f72d85f5769609ff62f955519e01952bafb56ab"} Feb 17 13:58:18 crc kubenswrapper[4833]: I0217 13:58:18.318829 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tc8dt" Feb 17 13:58:18 crc kubenswrapper[4833]: I0217 13:58:18.318848 4833 scope.go:117] "RemoveContainer" containerID="90c83a91a73403fa5e04fd6a2562cc2fe51150e21aa3bf285c887b534d02027b" Feb 17 13:58:18 crc kubenswrapper[4833]: I0217 13:58:18.319417 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6cbf77cb9b-nv8ks" Feb 17 13:58:18 crc kubenswrapper[4833]: I0217 13:58:18.337275 4833 scope.go:117] "RemoveContainer" containerID="099c79d33c6b96a4343ee23ac33ef571324de7c8a5e8aa5056023a0f8d9124eb" Feb 17 13:58:18 crc kubenswrapper[4833]: I0217 13:58:18.344392 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tc8dt"] Feb 17 13:58:18 crc kubenswrapper[4833]: I0217 13:58:18.351005 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tc8dt"] Feb 17 13:58:18 crc kubenswrapper[4833]: I0217 13:58:18.368797 4833 scope.go:117] "RemoveContainer" containerID="d898b00f30dd63bc860f4655ee4d5a4a1555d7af6b7d79ed50b8a9da83db2f33" Feb 17 13:58:18 crc kubenswrapper[4833]: I0217 13:58:18.389571 4833 scope.go:117] "RemoveContainer" containerID="90c83a91a73403fa5e04fd6a2562cc2fe51150e21aa3bf285c887b534d02027b" Feb 17 13:58:18 crc kubenswrapper[4833]: E0217 13:58:18.394229 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90c83a91a73403fa5e04fd6a2562cc2fe51150e21aa3bf285c887b534d02027b\": container with ID starting with 
90c83a91a73403fa5e04fd6a2562cc2fe51150e21aa3bf285c887b534d02027b not found: ID does not exist" containerID="90c83a91a73403fa5e04fd6a2562cc2fe51150e21aa3bf285c887b534d02027b" Feb 17 13:58:18 crc kubenswrapper[4833]: I0217 13:58:18.394311 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90c83a91a73403fa5e04fd6a2562cc2fe51150e21aa3bf285c887b534d02027b"} err="failed to get container status \"90c83a91a73403fa5e04fd6a2562cc2fe51150e21aa3bf285c887b534d02027b\": rpc error: code = NotFound desc = could not find container \"90c83a91a73403fa5e04fd6a2562cc2fe51150e21aa3bf285c887b534d02027b\": container with ID starting with 90c83a91a73403fa5e04fd6a2562cc2fe51150e21aa3bf285c887b534d02027b not found: ID does not exist" Feb 17 13:58:18 crc kubenswrapper[4833]: I0217 13:58:18.394339 4833 scope.go:117] "RemoveContainer" containerID="099c79d33c6b96a4343ee23ac33ef571324de7c8a5e8aa5056023a0f8d9124eb" Feb 17 13:58:18 crc kubenswrapper[4833]: E0217 13:58:18.394896 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"099c79d33c6b96a4343ee23ac33ef571324de7c8a5e8aa5056023a0f8d9124eb\": container with ID starting with 099c79d33c6b96a4343ee23ac33ef571324de7c8a5e8aa5056023a0f8d9124eb not found: ID does not exist" containerID="099c79d33c6b96a4343ee23ac33ef571324de7c8a5e8aa5056023a0f8d9124eb" Feb 17 13:58:18 crc kubenswrapper[4833]: I0217 13:58:18.394943 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"099c79d33c6b96a4343ee23ac33ef571324de7c8a5e8aa5056023a0f8d9124eb"} err="failed to get container status \"099c79d33c6b96a4343ee23ac33ef571324de7c8a5e8aa5056023a0f8d9124eb\": rpc error: code = NotFound desc = could not find container \"099c79d33c6b96a4343ee23ac33ef571324de7c8a5e8aa5056023a0f8d9124eb\": container with ID starting with 099c79d33c6b96a4343ee23ac33ef571324de7c8a5e8aa5056023a0f8d9124eb not found: ID does not 
exist" Feb 17 13:58:18 crc kubenswrapper[4833]: I0217 13:58:18.394976 4833 scope.go:117] "RemoveContainer" containerID="d898b00f30dd63bc860f4655ee4d5a4a1555d7af6b7d79ed50b8a9da83db2f33" Feb 17 13:58:18 crc kubenswrapper[4833]: E0217 13:58:18.395649 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d898b00f30dd63bc860f4655ee4d5a4a1555d7af6b7d79ed50b8a9da83db2f33\": container with ID starting with d898b00f30dd63bc860f4655ee4d5a4a1555d7af6b7d79ed50b8a9da83db2f33 not found: ID does not exist" containerID="d898b00f30dd63bc860f4655ee4d5a4a1555d7af6b7d79ed50b8a9da83db2f33" Feb 17 13:58:18 crc kubenswrapper[4833]: I0217 13:58:18.395683 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d898b00f30dd63bc860f4655ee4d5a4a1555d7af6b7d79ed50b8a9da83db2f33"} err="failed to get container status \"d898b00f30dd63bc860f4655ee4d5a4a1555d7af6b7d79ed50b8a9da83db2f33\": rpc error: code = NotFound desc = could not find container \"d898b00f30dd63bc860f4655ee4d5a4a1555d7af6b7d79ed50b8a9da83db2f33\": container with ID starting with d898b00f30dd63bc860f4655ee4d5a4a1555d7af6b7d79ed50b8a9da83db2f33 not found: ID does not exist" Feb 17 13:58:19 crc kubenswrapper[4833]: I0217 13:58:19.051102 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d32503c9-917e-4066-b28b-ddb951efad67" path="/var/lib/kubelet/pods/d32503c9-917e-4066-b28b-ddb951efad67/volumes" Feb 17 13:58:30 crc kubenswrapper[4833]: I0217 13:58:30.790942 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6cbf77cb9b-nv8ks" Feb 17 13:58:50 crc kubenswrapper[4833]: I0217 13:58:50.401430 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-55cdbb5474-tkm6p" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.226429 4833 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-8wwr4"] Feb 17 13:58:51 crc kubenswrapper[4833]: E0217 13:58:51.226970 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d32503c9-917e-4066-b28b-ddb951efad67" containerName="registry-server" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.226984 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d32503c9-917e-4066-b28b-ddb951efad67" containerName="registry-server" Feb 17 13:58:51 crc kubenswrapper[4833]: E0217 13:58:51.227002 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d32503c9-917e-4066-b28b-ddb951efad67" containerName="extract-content" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.227010 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d32503c9-917e-4066-b28b-ddb951efad67" containerName="extract-content" Feb 17 13:58:51 crc kubenswrapper[4833]: E0217 13:58:51.227029 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d32503c9-917e-4066-b28b-ddb951efad67" containerName="extract-utilities" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.227056 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d32503c9-917e-4066-b28b-ddb951efad67" containerName="extract-utilities" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.227182 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d32503c9-917e-4066-b28b-ddb951efad67" containerName="registry-server" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.227644 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8wwr4" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.230528 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-5d8h6" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.230567 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.230969 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-ls8rs"] Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.233677 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.235102 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.235425 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.237820 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-8wwr4"] Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.314097 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-slf66"] Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.314925 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-slf66" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.316724 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.316973 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-2mj9z" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.322063 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.322999 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.332755 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-clwcr"] Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.333908 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-clwcr" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.336891 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.366276 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-clwcr"] Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.413989 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4jlv\" (UniqueName: \"kubernetes.io/projected/0485a980-f2be-4c1b-83b4-3449e5794d2c-kube-api-access-w4jlv\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.414099 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0485a980-f2be-4c1b-83b4-3449e5794d2c-frr-sockets\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.414128 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0485a980-f2be-4c1b-83b4-3449e5794d2c-reloader\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.414176 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/034a2f99-1c32-4d53-8018-739d262fdc4c-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-8wwr4\" (UID: \"034a2f99-1c32-4d53-8018-739d262fdc4c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8wwr4" Feb 17 13:58:51 crc 
kubenswrapper[4833]: I0217 13:58:51.414194 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqm8q\" (UniqueName: \"kubernetes.io/projected/2b33d487-2e39-4f67-b34c-afc3b7cca769-kube-api-access-sqm8q\") pod \"speaker-slf66\" (UID: \"2b33d487-2e39-4f67-b34c-afc3b7cca769\") " pod="metallb-system/speaker-slf66" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.414213 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0485a980-f2be-4c1b-83b4-3449e5794d2c-metrics\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.414259 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2b33d487-2e39-4f67-b34c-afc3b7cca769-memberlist\") pod \"speaker-slf66\" (UID: \"2b33d487-2e39-4f67-b34c-afc3b7cca769\") " pod="metallb-system/speaker-slf66" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.414378 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0485a980-f2be-4c1b-83b4-3449e5794d2c-frr-conf\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.414424 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0485a980-f2be-4c1b-83b4-3449e5794d2c-frr-startup\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.414502 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cff2n\" (UniqueName: \"kubernetes.io/projected/034a2f99-1c32-4d53-8018-739d262fdc4c-kube-api-access-cff2n\") pod \"frr-k8s-webhook-server-78b44bf5bb-8wwr4\" (UID: \"034a2f99-1c32-4d53-8018-739d262fdc4c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8wwr4" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.414552 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2b33d487-2e39-4f67-b34c-afc3b7cca769-metallb-excludel2\") pod \"speaker-slf66\" (UID: \"2b33d487-2e39-4f67-b34c-afc3b7cca769\") " pod="metallb-system/speaker-slf66" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.414613 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0485a980-f2be-4c1b-83b4-3449e5794d2c-metrics-certs\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.414670 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b33d487-2e39-4f67-b34c-afc3b7cca769-metrics-certs\") pod \"speaker-slf66\" (UID: \"2b33d487-2e39-4f67-b34c-afc3b7cca769\") " pod="metallb-system/speaker-slf66" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.516498 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b33d487-2e39-4f67-b34c-afc3b7cca769-metrics-certs\") pod \"speaker-slf66\" (UID: \"2b33d487-2e39-4f67-b34c-afc3b7cca769\") " pod="metallb-system/speaker-slf66" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.516576 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-w4jlv\" (UniqueName: \"kubernetes.io/projected/0485a980-f2be-4c1b-83b4-3449e5794d2c-kube-api-access-w4jlv\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.516608 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0485a980-f2be-4c1b-83b4-3449e5794d2c-frr-sockets\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.516646 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6-cert\") pod \"controller-69bbfbf88f-clwcr\" (UID: \"8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6\") " pod="metallb-system/controller-69bbfbf88f-clwcr" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.516672 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0485a980-f2be-4c1b-83b4-3449e5794d2c-reloader\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.516692 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/034a2f99-1c32-4d53-8018-739d262fdc4c-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-8wwr4\" (UID: \"034a2f99-1c32-4d53-8018-739d262fdc4c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8wwr4" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.516712 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqm8q\" (UniqueName: 
\"kubernetes.io/projected/2b33d487-2e39-4f67-b34c-afc3b7cca769-kube-api-access-sqm8q\") pod \"speaker-slf66\" (UID: \"2b33d487-2e39-4f67-b34c-afc3b7cca769\") " pod="metallb-system/speaker-slf66" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.516740 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0485a980-f2be-4c1b-83b4-3449e5794d2c-metrics\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.516764 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6-metrics-certs\") pod \"controller-69bbfbf88f-clwcr\" (UID: \"8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6\") " pod="metallb-system/controller-69bbfbf88f-clwcr" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.516786 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2b33d487-2e39-4f67-b34c-afc3b7cca769-memberlist\") pod \"speaker-slf66\" (UID: \"2b33d487-2e39-4f67-b34c-afc3b7cca769\") " pod="metallb-system/speaker-slf66" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.516807 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9pn4\" (UniqueName: \"kubernetes.io/projected/8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6-kube-api-access-v9pn4\") pod \"controller-69bbfbf88f-clwcr\" (UID: \"8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6\") " pod="metallb-system/controller-69bbfbf88f-clwcr" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.516832 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0485a980-f2be-4c1b-83b4-3449e5794d2c-frr-conf\") pod \"frr-k8s-ls8rs\" 
(UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.516853 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0485a980-f2be-4c1b-83b4-3449e5794d2c-frr-startup\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.516882 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cff2n\" (UniqueName: \"kubernetes.io/projected/034a2f99-1c32-4d53-8018-739d262fdc4c-kube-api-access-cff2n\") pod \"frr-k8s-webhook-server-78b44bf5bb-8wwr4\" (UID: \"034a2f99-1c32-4d53-8018-739d262fdc4c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8wwr4" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.517009 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2b33d487-2e39-4f67-b34c-afc3b7cca769-metallb-excludel2\") pod \"speaker-slf66\" (UID: \"2b33d487-2e39-4f67-b34c-afc3b7cca769\") " pod="metallb-system/speaker-slf66" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.517178 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0485a980-f2be-4c1b-83b4-3449e5794d2c-metrics-certs\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.517200 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0485a980-f2be-4c1b-83b4-3449e5794d2c-frr-sockets\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 
13:58:51.517625 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0485a980-f2be-4c1b-83b4-3449e5794d2c-reloader\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.517878 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0485a980-f2be-4c1b-83b4-3449e5794d2c-metrics\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.518359 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0485a980-f2be-4c1b-83b4-3449e5794d2c-frr-startup\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: E0217 13:58:51.518442 4833 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 13:58:51 crc kubenswrapper[4833]: E0217 13:58:51.518492 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b33d487-2e39-4f67-b34c-afc3b7cca769-memberlist podName:2b33d487-2e39-4f67-b34c-afc3b7cca769 nodeName:}" failed. No retries permitted until 2026-02-17 13:58:52.018474981 +0000 UTC m=+821.653574424 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2b33d487-2e39-4f67-b34c-afc3b7cca769-memberlist") pod "speaker-slf66" (UID: "2b33d487-2e39-4f67-b34c-afc3b7cca769") : secret "metallb-memberlist" not found Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.518822 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2b33d487-2e39-4f67-b34c-afc3b7cca769-metallb-excludel2\") pod \"speaker-slf66\" (UID: \"2b33d487-2e39-4f67-b34c-afc3b7cca769\") " pod="metallb-system/speaker-slf66" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.518927 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0485a980-f2be-4c1b-83b4-3449e5794d2c-frr-conf\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.523410 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0485a980-f2be-4c1b-83b4-3449e5794d2c-metrics-certs\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.523796 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b33d487-2e39-4f67-b34c-afc3b7cca769-metrics-certs\") pod \"speaker-slf66\" (UID: \"2b33d487-2e39-4f67-b34c-afc3b7cca769\") " pod="metallb-system/speaker-slf66" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.528869 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/034a2f99-1c32-4d53-8018-739d262fdc4c-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-8wwr4\" (UID: \"034a2f99-1c32-4d53-8018-739d262fdc4c\") " 
pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8wwr4" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.546436 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cff2n\" (UniqueName: \"kubernetes.io/projected/034a2f99-1c32-4d53-8018-739d262fdc4c-kube-api-access-cff2n\") pod \"frr-k8s-webhook-server-78b44bf5bb-8wwr4\" (UID: \"034a2f99-1c32-4d53-8018-739d262fdc4c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8wwr4" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.546918 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4jlv\" (UniqueName: \"kubernetes.io/projected/0485a980-f2be-4c1b-83b4-3449e5794d2c-kube-api-access-w4jlv\") pod \"frr-k8s-ls8rs\" (UID: \"0485a980-f2be-4c1b-83b4-3449e5794d2c\") " pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.550765 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqm8q\" (UniqueName: \"kubernetes.io/projected/2b33d487-2e39-4f67-b34c-afc3b7cca769-kube-api-access-sqm8q\") pod \"speaker-slf66\" (UID: \"2b33d487-2e39-4f67-b34c-afc3b7cca769\") " pod="metallb-system/speaker-slf66" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.552386 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8wwr4" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.561725 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.619086 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6-cert\") pod \"controller-69bbfbf88f-clwcr\" (UID: \"8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6\") " pod="metallb-system/controller-69bbfbf88f-clwcr" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.619141 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6-metrics-certs\") pod \"controller-69bbfbf88f-clwcr\" (UID: \"8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6\") " pod="metallb-system/controller-69bbfbf88f-clwcr" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.619174 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9pn4\" (UniqueName: \"kubernetes.io/projected/8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6-kube-api-access-v9pn4\") pod \"controller-69bbfbf88f-clwcr\" (UID: \"8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6\") " pod="metallb-system/controller-69bbfbf88f-clwcr" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.622679 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6-metrics-certs\") pod \"controller-69bbfbf88f-clwcr\" (UID: \"8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6\") " pod="metallb-system/controller-69bbfbf88f-clwcr" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.624168 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.633063 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6-cert\") 
pod \"controller-69bbfbf88f-clwcr\" (UID: \"8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6\") " pod="metallb-system/controller-69bbfbf88f-clwcr" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.639312 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9pn4\" (UniqueName: \"kubernetes.io/projected/8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6-kube-api-access-v9pn4\") pod \"controller-69bbfbf88f-clwcr\" (UID: \"8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6\") " pod="metallb-system/controller-69bbfbf88f-clwcr" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.662506 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-clwcr" Feb 17 13:58:51 crc kubenswrapper[4833]: I0217 13:58:51.977457 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-8wwr4"] Feb 17 13:58:51 crc kubenswrapper[4833]: W0217 13:58:51.983839 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod034a2f99_1c32_4d53_8018_739d262fdc4c.slice/crio-ce960f02f58374851732ca6fbfcf8f3f4948d6271298bcdc559df0eb1f0f91f7 WatchSource:0}: Error finding container ce960f02f58374851732ca6fbfcf8f3f4948d6271298bcdc559df0eb1f0f91f7: Status 404 returned error can't find the container with id ce960f02f58374851732ca6fbfcf8f3f4948d6271298bcdc559df0eb1f0f91f7 Feb 17 13:58:52 crc kubenswrapper[4833]: I0217 13:58:52.028930 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2b33d487-2e39-4f67-b34c-afc3b7cca769-memberlist\") pod \"speaker-slf66\" (UID: \"2b33d487-2e39-4f67-b34c-afc3b7cca769\") " pod="metallb-system/speaker-slf66" Feb 17 13:58:52 crc kubenswrapper[4833]: E0217 13:58:52.029120 4833 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 13:58:52 crc 
kubenswrapper[4833]: E0217 13:58:52.029199 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b33d487-2e39-4f67-b34c-afc3b7cca769-memberlist podName:2b33d487-2e39-4f67-b34c-afc3b7cca769 nodeName:}" failed. No retries permitted until 2026-02-17 13:58:53.029176399 +0000 UTC m=+822.664275832 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2b33d487-2e39-4f67-b34c-afc3b7cca769-memberlist") pod "speaker-slf66" (UID: "2b33d487-2e39-4f67-b34c-afc3b7cca769") : secret "metallb-memberlist" not found Feb 17 13:58:52 crc kubenswrapper[4833]: I0217 13:58:52.111598 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-clwcr"] Feb 17 13:58:52 crc kubenswrapper[4833]: W0217 13:58:52.112351 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e4e92de_aa2d_4d30_82b2_703a3cf2d7b6.slice/crio-c27b00cf330507ff0a4009fe2fec7beb9b384526476cb8dae1b1ca12becf54b3 WatchSource:0}: Error finding container c27b00cf330507ff0a4009fe2fec7beb9b384526476cb8dae1b1ca12becf54b3: Status 404 returned error can't find the container with id c27b00cf330507ff0a4009fe2fec7beb9b384526476cb8dae1b1ca12becf54b3 Feb 17 13:58:52 crc kubenswrapper[4833]: I0217 13:58:52.532362 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ls8rs" event={"ID":"0485a980-f2be-4c1b-83b4-3449e5794d2c","Type":"ContainerStarted","Data":"3d3cc75afb9c8d64733565e7a3f61ef77948f96562c9d41129e8b74ed7d58a7f"} Feb 17 13:58:52 crc kubenswrapper[4833]: I0217 13:58:52.534742 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-clwcr" event={"ID":"8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6","Type":"ContainerStarted","Data":"d5f2b363ae41a2fb5718a20049649c74da5d4d45f08ffd54acd8cd2aa787e803"} Feb 17 13:58:52 crc kubenswrapper[4833]: I0217 13:58:52.534800 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-clwcr" event={"ID":"8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6","Type":"ContainerStarted","Data":"81a8ad404214e57aab46d317383875f79e08f7d183aa905a75eea1b8912c6470"} Feb 17 13:58:52 crc kubenswrapper[4833]: I0217 13:58:52.534818 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-clwcr" event={"ID":"8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6","Type":"ContainerStarted","Data":"c27b00cf330507ff0a4009fe2fec7beb9b384526476cb8dae1b1ca12becf54b3"} Feb 17 13:58:52 crc kubenswrapper[4833]: I0217 13:58:52.534870 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-clwcr" Feb 17 13:58:52 crc kubenswrapper[4833]: I0217 13:58:52.535745 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8wwr4" event={"ID":"034a2f99-1c32-4d53-8018-739d262fdc4c","Type":"ContainerStarted","Data":"ce960f02f58374851732ca6fbfcf8f3f4948d6271298bcdc559df0eb1f0f91f7"} Feb 17 13:58:52 crc kubenswrapper[4833]: I0217 13:58:52.557968 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-clwcr" podStartSLOduration=1.5579399569999999 podStartE2EDuration="1.557939957s" podCreationTimestamp="2026-02-17 13:58:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:58:52.557387152 +0000 UTC m=+822.192486605" watchObservedRunningTime="2026-02-17 13:58:52.557939957 +0000 UTC m=+822.193039390" Feb 17 13:58:53 crc kubenswrapper[4833]: I0217 13:58:53.047463 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2b33d487-2e39-4f67-b34c-afc3b7cca769-memberlist\") pod \"speaker-slf66\" (UID: \"2b33d487-2e39-4f67-b34c-afc3b7cca769\") " 
pod="metallb-system/speaker-slf66" Feb 17 13:58:53 crc kubenswrapper[4833]: I0217 13:58:53.053395 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2b33d487-2e39-4f67-b34c-afc3b7cca769-memberlist\") pod \"speaker-slf66\" (UID: \"2b33d487-2e39-4f67-b34c-afc3b7cca769\") " pod="metallb-system/speaker-slf66" Feb 17 13:58:53 crc kubenswrapper[4833]: I0217 13:58:53.130863 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-slf66" Feb 17 13:58:53 crc kubenswrapper[4833]: W0217 13:58:53.150695 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b33d487_2e39_4f67_b34c_afc3b7cca769.slice/crio-b65724f318da7df4a1b7f5de48907cf8039fc45f74336d0fc75b0460b2eeca13 WatchSource:0}: Error finding container b65724f318da7df4a1b7f5de48907cf8039fc45f74336d0fc75b0460b2eeca13: Status 404 returned error can't find the container with id b65724f318da7df4a1b7f5de48907cf8039fc45f74336d0fc75b0460b2eeca13 Feb 17 13:58:53 crc kubenswrapper[4833]: I0217 13:58:53.556246 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-slf66" event={"ID":"2b33d487-2e39-4f67-b34c-afc3b7cca769","Type":"ContainerStarted","Data":"b1b03c6b0a4bd7b0154877eb4673b4400720b2c3d7f7ba7673ff011bfb8ac179"} Feb 17 13:58:53 crc kubenswrapper[4833]: I0217 13:58:53.556287 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-slf66" event={"ID":"2b33d487-2e39-4f67-b34c-afc3b7cca769","Type":"ContainerStarted","Data":"b65724f318da7df4a1b7f5de48907cf8039fc45f74336d0fc75b0460b2eeca13"} Feb 17 13:58:54 crc kubenswrapper[4833]: I0217 13:58:54.565854 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-slf66" 
event={"ID":"2b33d487-2e39-4f67-b34c-afc3b7cca769","Type":"ContainerStarted","Data":"7b9b9d06b6321542d6b13f081646c4be748c7b1dcea34529ec4f04798e810275"} Feb 17 13:58:54 crc kubenswrapper[4833]: I0217 13:58:54.566096 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-slf66" Feb 17 13:58:54 crc kubenswrapper[4833]: I0217 13:58:54.586924 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-slf66" podStartSLOduration=3.586909164 podStartE2EDuration="3.586909164s" podCreationTimestamp="2026-02-17 13:58:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:58:54.585426002 +0000 UTC m=+824.220525455" watchObservedRunningTime="2026-02-17 13:58:54.586909164 +0000 UTC m=+824.222008597" Feb 17 13:59:00 crc kubenswrapper[4833]: E0217 13:59:00.738658 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0485a980_f2be_4c1b_83b4_3449e5794d2c.slice/crio-conmon-12ac9605c0ca1cb61d8a655cbb6e28b31b902816ff273bafc14cb5331fdf0154.scope\": RecentStats: unable to find data in memory cache]" Feb 17 13:59:01 crc kubenswrapper[4833]: I0217 13:59:01.616084 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8wwr4" event={"ID":"034a2f99-1c32-4d53-8018-739d262fdc4c","Type":"ContainerStarted","Data":"1d1b72ce3babeff8d40f88ba6ba4b20c237bf4c833823ecf557b2001c4553f20"} Feb 17 13:59:01 crc kubenswrapper[4833]: I0217 13:59:01.616478 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8wwr4" Feb 17 13:59:01 crc kubenswrapper[4833]: I0217 13:59:01.619412 4833 generic.go:334] "Generic (PLEG): container finished" podID="0485a980-f2be-4c1b-83b4-3449e5794d2c" 
containerID="12ac9605c0ca1cb61d8a655cbb6e28b31b902816ff273bafc14cb5331fdf0154" exitCode=0 Feb 17 13:59:01 crc kubenswrapper[4833]: I0217 13:59:01.619478 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ls8rs" event={"ID":"0485a980-f2be-4c1b-83b4-3449e5794d2c","Type":"ContainerDied","Data":"12ac9605c0ca1cb61d8a655cbb6e28b31b902816ff273bafc14cb5331fdf0154"} Feb 17 13:59:01 crc kubenswrapper[4833]: I0217 13:59:01.635704 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8wwr4" podStartSLOduration=2.149821802 podStartE2EDuration="10.635685833s" podCreationTimestamp="2026-02-17 13:58:51 +0000 UTC" firstStartedPulling="2026-02-17 13:58:51.985601935 +0000 UTC m=+821.620701368" lastFinishedPulling="2026-02-17 13:59:00.471465966 +0000 UTC m=+830.106565399" observedRunningTime="2026-02-17 13:59:01.631110363 +0000 UTC m=+831.266209796" watchObservedRunningTime="2026-02-17 13:59:01.635685833 +0000 UTC m=+831.270785266" Feb 17 13:59:02 crc kubenswrapper[4833]: I0217 13:59:02.627123 4833 generic.go:334] "Generic (PLEG): container finished" podID="0485a980-f2be-4c1b-83b4-3449e5794d2c" containerID="cb04c42aaed572adea5224e77e979c9a580c84a98ff2be8b6c420ae1bb54a9e9" exitCode=0 Feb 17 13:59:02 crc kubenswrapper[4833]: I0217 13:59:02.627201 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ls8rs" event={"ID":"0485a980-f2be-4c1b-83b4-3449e5794d2c","Type":"ContainerDied","Data":"cb04c42aaed572adea5224e77e979c9a580c84a98ff2be8b6c420ae1bb54a9e9"} Feb 17 13:59:03 crc kubenswrapper[4833]: I0217 13:59:03.136566 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-slf66" Feb 17 13:59:03 crc kubenswrapper[4833]: I0217 13:59:03.639332 4833 generic.go:334] "Generic (PLEG): container finished" podID="0485a980-f2be-4c1b-83b4-3449e5794d2c" 
containerID="2f23aff0f35b4b9a3b67e8c95de646543845cb2b217a1f2535fcc7b17c54f5d7" exitCode=0 Feb 17 13:59:03 crc kubenswrapper[4833]: I0217 13:59:03.639401 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ls8rs" event={"ID":"0485a980-f2be-4c1b-83b4-3449e5794d2c","Type":"ContainerDied","Data":"2f23aff0f35b4b9a3b67e8c95de646543845cb2b217a1f2535fcc7b17c54f5d7"} Feb 17 13:59:04 crc kubenswrapper[4833]: I0217 13:59:04.659134 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ls8rs" event={"ID":"0485a980-f2be-4c1b-83b4-3449e5794d2c","Type":"ContainerStarted","Data":"3f878e6ff095a5b7ea2b0668791aa96a4d71d0c340c0d1024ca2c07a40b91642"} Feb 17 13:59:04 crc kubenswrapper[4833]: I0217 13:59:04.659587 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ls8rs" event={"ID":"0485a980-f2be-4c1b-83b4-3449e5794d2c","Type":"ContainerStarted","Data":"10415cb1736db737dc05cf4f226aa0eaa23e1e9aab15103ac428d292db3c0db9"} Feb 17 13:59:04 crc kubenswrapper[4833]: I0217 13:59:04.659635 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ls8rs" event={"ID":"0485a980-f2be-4c1b-83b4-3449e5794d2c","Type":"ContainerStarted","Data":"a2343e186191bd93b62ce75700eafac3e26203b150a54b7319691cfb8d57fa73"} Feb 17 13:59:04 crc kubenswrapper[4833]: I0217 13:59:04.659655 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ls8rs" event={"ID":"0485a980-f2be-4c1b-83b4-3449e5794d2c","Type":"ContainerStarted","Data":"1bdb5dd4952c50356b83142e33adc040d06af9e3384f8014ecddd5336c09cbc6"} Feb 17 13:59:04 crc kubenswrapper[4833]: I0217 13:59:04.659673 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ls8rs" event={"ID":"0485a980-f2be-4c1b-83b4-3449e5794d2c","Type":"ContainerStarted","Data":"3a05767dada8f98f5d57b101d31c485e5650835e6aeea646d4b9352cbc822418"} Feb 17 13:59:04 crc kubenswrapper[4833]: I0217 13:59:04.688260 4833 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf"] Feb 17 13:59:04 crc kubenswrapper[4833]: I0217 13:59:04.689676 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf" Feb 17 13:59:04 crc kubenswrapper[4833]: I0217 13:59:04.694584 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 13:59:04 crc kubenswrapper[4833]: I0217 13:59:04.704396 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf"] Feb 17 13:59:04 crc kubenswrapper[4833]: I0217 13:59:04.732909 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r88tl\" (UniqueName: \"kubernetes.io/projected/29c10323-62d4-4656-a511-c3ec32027993-kube-api-access-r88tl\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf\" (UID: \"29c10323-62d4-4656-a511-c3ec32027993\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf" Feb 17 13:59:04 crc kubenswrapper[4833]: I0217 13:59:04.733005 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29c10323-62d4-4656-a511-c3ec32027993-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf\" (UID: \"29c10323-62d4-4656-a511-c3ec32027993\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf" Feb 17 13:59:04 crc kubenswrapper[4833]: I0217 13:59:04.733098 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29c10323-62d4-4656-a511-c3ec32027993-util\") pod 
\"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf\" (UID: \"29c10323-62d4-4656-a511-c3ec32027993\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf" Feb 17 13:59:04 crc kubenswrapper[4833]: I0217 13:59:04.834744 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r88tl\" (UniqueName: \"kubernetes.io/projected/29c10323-62d4-4656-a511-c3ec32027993-kube-api-access-r88tl\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf\" (UID: \"29c10323-62d4-4656-a511-c3ec32027993\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf" Feb 17 13:59:04 crc kubenswrapper[4833]: I0217 13:59:04.835120 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29c10323-62d4-4656-a511-c3ec32027993-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf\" (UID: \"29c10323-62d4-4656-a511-c3ec32027993\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf" Feb 17 13:59:04 crc kubenswrapper[4833]: I0217 13:59:04.835223 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29c10323-62d4-4656-a511-c3ec32027993-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf\" (UID: \"29c10323-62d4-4656-a511-c3ec32027993\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf" Feb 17 13:59:04 crc kubenswrapper[4833]: I0217 13:59:04.835595 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29c10323-62d4-4656-a511-c3ec32027993-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf\" (UID: \"29c10323-62d4-4656-a511-c3ec32027993\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf" Feb 17 13:59:04 crc kubenswrapper[4833]: I0217 13:59:04.835723 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29c10323-62d4-4656-a511-c3ec32027993-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf\" (UID: \"29c10323-62d4-4656-a511-c3ec32027993\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf" Feb 17 13:59:04 crc kubenswrapper[4833]: I0217 13:59:04.853527 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r88tl\" (UniqueName: \"kubernetes.io/projected/29c10323-62d4-4656-a511-c3ec32027993-kube-api-access-r88tl\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf\" (UID: \"29c10323-62d4-4656-a511-c3ec32027993\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf" Feb 17 13:59:05 crc kubenswrapper[4833]: I0217 13:59:05.043609 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf" Feb 17 13:59:05 crc kubenswrapper[4833]: I0217 13:59:05.427843 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf"] Feb 17 13:59:05 crc kubenswrapper[4833]: I0217 13:59:05.667464 4833 generic.go:334] "Generic (PLEG): container finished" podID="29c10323-62d4-4656-a511-c3ec32027993" containerID="7e3a22af9724b8828fb2e661cbce3bd051ae26bd8725483a4e5f40df9848884f" exitCode=0 Feb 17 13:59:05 crc kubenswrapper[4833]: I0217 13:59:05.669279 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf" event={"ID":"29c10323-62d4-4656-a511-c3ec32027993","Type":"ContainerDied","Data":"7e3a22af9724b8828fb2e661cbce3bd051ae26bd8725483a4e5f40df9848884f"} Feb 17 13:59:05 crc kubenswrapper[4833]: I0217 13:59:05.669339 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf" event={"ID":"29c10323-62d4-4656-a511-c3ec32027993","Type":"ContainerStarted","Data":"a311fb3e9681fcd3dd91cbbe67ddc44a1971af89bc7af4843fcf65b1f8bb0d1b"} Feb 17 13:59:05 crc kubenswrapper[4833]: I0217 13:59:05.679010 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ls8rs" event={"ID":"0485a980-f2be-4c1b-83b4-3449e5794d2c","Type":"ContainerStarted","Data":"2d6d4ca2ff2ee104a4c9d91803aa3efe66ed768ff1b61590d93caa6dcef0c59b"} Feb 17 13:59:05 crc kubenswrapper[4833]: I0217 13:59:05.679190 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:59:05 crc kubenswrapper[4833]: I0217 13:59:05.717008 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-ls8rs" podStartSLOduration=6.012789014 
podStartE2EDuration="14.716985816s" podCreationTimestamp="2026-02-17 13:58:51 +0000 UTC" firstStartedPulling="2026-02-17 13:58:51.762677494 +0000 UTC m=+821.397776927" lastFinishedPulling="2026-02-17 13:59:00.466874296 +0000 UTC m=+830.101973729" observedRunningTime="2026-02-17 13:59:05.71502389 +0000 UTC m=+835.350123343" watchObservedRunningTime="2026-02-17 13:59:05.716985816 +0000 UTC m=+835.352085249" Feb 17 13:59:06 crc kubenswrapper[4833]: I0217 13:59:06.562498 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:59:06 crc kubenswrapper[4833]: I0217 13:59:06.624680 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-ls8rs" Feb 17 13:59:08 crc kubenswrapper[4833]: I0217 13:59:08.702748 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf" event={"ID":"29c10323-62d4-4656-a511-c3ec32027993","Type":"ContainerStarted","Data":"f30788d712f0acb8690996aa452ca56655229eefe12f1efb305a22ebf3ac9f13"} Feb 17 13:59:09 crc kubenswrapper[4833]: I0217 13:59:09.712904 4833 generic.go:334] "Generic (PLEG): container finished" podID="29c10323-62d4-4656-a511-c3ec32027993" containerID="f30788d712f0acb8690996aa452ca56655229eefe12f1efb305a22ebf3ac9f13" exitCode=0 Feb 17 13:59:09 crc kubenswrapper[4833]: I0217 13:59:09.712965 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf" event={"ID":"29c10323-62d4-4656-a511-c3ec32027993","Type":"ContainerDied","Data":"f30788d712f0acb8690996aa452ca56655229eefe12f1efb305a22ebf3ac9f13"} Feb 17 13:59:10 crc kubenswrapper[4833]: I0217 13:59:10.724003 4833 generic.go:334] "Generic (PLEG): container finished" podID="29c10323-62d4-4656-a511-c3ec32027993" containerID="c453f879dd7dc862bc5e3f7ddd6e0e6db83babc21ca8efd3a69ad04dd404ccf4" exitCode=0 Feb 17 
13:59:10 crc kubenswrapper[4833]: I0217 13:59:10.724080 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf" event={"ID":"29c10323-62d4-4656-a511-c3ec32027993","Type":"ContainerDied","Data":"c453f879dd7dc862bc5e3f7ddd6e0e6db83babc21ca8efd3a69ad04dd404ccf4"} Feb 17 13:59:11 crc kubenswrapper[4833]: I0217 13:59:11.556849 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8wwr4" Feb 17 13:59:11 crc kubenswrapper[4833]: I0217 13:59:11.667670 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-clwcr" Feb 17 13:59:11 crc kubenswrapper[4833]: I0217 13:59:11.979419 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf" Feb 17 13:59:12 crc kubenswrapper[4833]: I0217 13:59:12.044100 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29c10323-62d4-4656-a511-c3ec32027993-bundle\") pod \"29c10323-62d4-4656-a511-c3ec32027993\" (UID: \"29c10323-62d4-4656-a511-c3ec32027993\") " Feb 17 13:59:12 crc kubenswrapper[4833]: I0217 13:59:12.044411 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29c10323-62d4-4656-a511-c3ec32027993-util\") pod \"29c10323-62d4-4656-a511-c3ec32027993\" (UID: \"29c10323-62d4-4656-a511-c3ec32027993\") " Feb 17 13:59:12 crc kubenswrapper[4833]: I0217 13:59:12.044459 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r88tl\" (UniqueName: \"kubernetes.io/projected/29c10323-62d4-4656-a511-c3ec32027993-kube-api-access-r88tl\") pod \"29c10323-62d4-4656-a511-c3ec32027993\" (UID: 
\"29c10323-62d4-4656-a511-c3ec32027993\") "
Feb 17 13:59:12 crc kubenswrapper[4833]: I0217 13:59:12.045181 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29c10323-62d4-4656-a511-c3ec32027993-bundle" (OuterVolumeSpecName: "bundle") pod "29c10323-62d4-4656-a511-c3ec32027993" (UID: "29c10323-62d4-4656-a511-c3ec32027993"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:59:12 crc kubenswrapper[4833]: I0217 13:59:12.050811 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29c10323-62d4-4656-a511-c3ec32027993-kube-api-access-r88tl" (OuterVolumeSpecName: "kube-api-access-r88tl") pod "29c10323-62d4-4656-a511-c3ec32027993" (UID: "29c10323-62d4-4656-a511-c3ec32027993"). InnerVolumeSpecName "kube-api-access-r88tl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:59:12 crc kubenswrapper[4833]: I0217 13:59:12.055420 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29c10323-62d4-4656-a511-c3ec32027993-util" (OuterVolumeSpecName: "util") pod "29c10323-62d4-4656-a511-c3ec32027993" (UID: "29c10323-62d4-4656-a511-c3ec32027993"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:59:12 crc kubenswrapper[4833]: I0217 13:59:12.145972 4833 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29c10323-62d4-4656-a511-c3ec32027993-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 13:59:12 crc kubenswrapper[4833]: I0217 13:59:12.146001 4833 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29c10323-62d4-4656-a511-c3ec32027993-util\") on node \"crc\" DevicePath \"\""
Feb 17 13:59:12 crc kubenswrapper[4833]: I0217 13:59:12.146011 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r88tl\" (UniqueName: \"kubernetes.io/projected/29c10323-62d4-4656-a511-c3ec32027993-kube-api-access-r88tl\") on node \"crc\" DevicePath \"\""
Feb 17 13:59:12 crc kubenswrapper[4833]: I0217 13:59:12.737469 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf" event={"ID":"29c10323-62d4-4656-a511-c3ec32027993","Type":"ContainerDied","Data":"a311fb3e9681fcd3dd91cbbe67ddc44a1971af89bc7af4843fcf65b1f8bb0d1b"}
Feb 17 13:59:12 crc kubenswrapper[4833]: I0217 13:59:12.737748 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a311fb3e9681fcd3dd91cbbe67ddc44a1971af89bc7af4843fcf65b1f8bb0d1b"
Feb 17 13:59:12 crc kubenswrapper[4833]: I0217 13:59:12.737530 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf"
Feb 17 13:59:17 crc kubenswrapper[4833]: I0217 13:59:17.800961 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-w5zdw"]
Feb 17 13:59:17 crc kubenswrapper[4833]: E0217 13:59:17.801576 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c10323-62d4-4656-a511-c3ec32027993" containerName="pull"
Feb 17 13:59:17 crc kubenswrapper[4833]: I0217 13:59:17.801588 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c10323-62d4-4656-a511-c3ec32027993" containerName="pull"
Feb 17 13:59:17 crc kubenswrapper[4833]: E0217 13:59:17.801607 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c10323-62d4-4656-a511-c3ec32027993" containerName="util"
Feb 17 13:59:17 crc kubenswrapper[4833]: I0217 13:59:17.801613 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c10323-62d4-4656-a511-c3ec32027993" containerName="util"
Feb 17 13:59:17 crc kubenswrapper[4833]: E0217 13:59:17.801636 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c10323-62d4-4656-a511-c3ec32027993" containerName="extract"
Feb 17 13:59:17 crc kubenswrapper[4833]: I0217 13:59:17.801642 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c10323-62d4-4656-a511-c3ec32027993" containerName="extract"
Feb 17 13:59:17 crc kubenswrapper[4833]: I0217 13:59:17.801740 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="29c10323-62d4-4656-a511-c3ec32027993" containerName="extract"
Feb 17 13:59:17 crc kubenswrapper[4833]: I0217 13:59:17.802143 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-w5zdw"
Feb 17 13:59:17 crc kubenswrapper[4833]: I0217 13:59:17.804300 4833 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-hbcjs"
Feb 17 13:59:17 crc kubenswrapper[4833]: I0217 13:59:17.804378 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Feb 17 13:59:17 crc kubenswrapper[4833]: I0217 13:59:17.804728 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Feb 17 13:59:17 crc kubenswrapper[4833]: I0217 13:59:17.826242 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-w5zdw"]
Feb 17 13:59:17 crc kubenswrapper[4833]: I0217 13:59:17.919356 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6bb19f44-e6d4-41c1-bf9a-203f2a98d55f-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-w5zdw\" (UID: \"6bb19f44-e6d4-41c1-bf9a-203f2a98d55f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-w5zdw"
Feb 17 13:59:17 crc kubenswrapper[4833]: I0217 13:59:17.919432 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqtq4\" (UniqueName: \"kubernetes.io/projected/6bb19f44-e6d4-41c1-bf9a-203f2a98d55f-kube-api-access-gqtq4\") pod \"cert-manager-operator-controller-manager-66c8bdd694-w5zdw\" (UID: \"6bb19f44-e6d4-41c1-bf9a-203f2a98d55f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-w5zdw"
Feb 17 13:59:18 crc kubenswrapper[4833]: I0217 13:59:18.021339 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6bb19f44-e6d4-41c1-bf9a-203f2a98d55f-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-w5zdw\" (UID: \"6bb19f44-e6d4-41c1-bf9a-203f2a98d55f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-w5zdw"
Feb 17 13:59:18 crc kubenswrapper[4833]: I0217 13:59:18.021416 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqtq4\" (UniqueName: \"kubernetes.io/projected/6bb19f44-e6d4-41c1-bf9a-203f2a98d55f-kube-api-access-gqtq4\") pod \"cert-manager-operator-controller-manager-66c8bdd694-w5zdw\" (UID: \"6bb19f44-e6d4-41c1-bf9a-203f2a98d55f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-w5zdw"
Feb 17 13:59:18 crc kubenswrapper[4833]: I0217 13:59:18.021978 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6bb19f44-e6d4-41c1-bf9a-203f2a98d55f-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-w5zdw\" (UID: \"6bb19f44-e6d4-41c1-bf9a-203f2a98d55f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-w5zdw"
Feb 17 13:59:18 crc kubenswrapper[4833]: I0217 13:59:18.038344 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqtq4\" (UniqueName: \"kubernetes.io/projected/6bb19f44-e6d4-41c1-bf9a-203f2a98d55f-kube-api-access-gqtq4\") pod \"cert-manager-operator-controller-manager-66c8bdd694-w5zdw\" (UID: \"6bb19f44-e6d4-41c1-bf9a-203f2a98d55f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-w5zdw"
Feb 17 13:59:18 crc kubenswrapper[4833]: I0217 13:59:18.121671 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-w5zdw"
Feb 17 13:59:18 crc kubenswrapper[4833]: I0217 13:59:18.658580 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-w5zdw"]
Feb 17 13:59:18 crc kubenswrapper[4833]: W0217 13:59:18.664262 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bb19f44_e6d4_41c1_bf9a_203f2a98d55f.slice/crio-fba99ec7fc5e1915b36e0e16dbbf73b61f4a62965e3e79cf128e223f4f4b2e27 WatchSource:0}: Error finding container fba99ec7fc5e1915b36e0e16dbbf73b61f4a62965e3e79cf128e223f4f4b2e27: Status 404 returned error can't find the container with id fba99ec7fc5e1915b36e0e16dbbf73b61f4a62965e3e79cf128e223f4f4b2e27
Feb 17 13:59:18 crc kubenswrapper[4833]: I0217 13:59:18.774256 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-w5zdw" event={"ID":"6bb19f44-e6d4-41c1-bf9a-203f2a98d55f","Type":"ContainerStarted","Data":"fba99ec7fc5e1915b36e0e16dbbf73b61f4a62965e3e79cf128e223f4f4b2e27"}
Feb 17 13:59:21 crc kubenswrapper[4833]: I0217 13:59:21.564711 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-ls8rs"
Feb 17 13:59:22 crc kubenswrapper[4833]: I0217 13:59:22.809297 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-w5zdw" event={"ID":"6bb19f44-e6d4-41c1-bf9a-203f2a98d55f","Type":"ContainerStarted","Data":"53985bd99fa05a48abbea53bc078466ec49ddbab379732cd819c441bcdbee919"}
Feb 17 13:59:24 crc kubenswrapper[4833]: I0217 13:59:24.697203 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-w5zdw" podStartSLOduration=4.493146168 podStartE2EDuration="7.69718779s" podCreationTimestamp="2026-02-17 13:59:17 +0000 UTC" firstStartedPulling="2026-02-17 13:59:18.669075188 +0000 UTC m=+848.304174661" lastFinishedPulling="2026-02-17 13:59:21.87311684 +0000 UTC m=+851.508216283" observedRunningTime="2026-02-17 13:59:22.83544009 +0000 UTC m=+852.470539573" watchObservedRunningTime="2026-02-17 13:59:24.69718779 +0000 UTC m=+854.332287223"
Feb 17 13:59:24 crc kubenswrapper[4833]: I0217 13:59:24.701389 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-lqxtp"]
Feb 17 13:59:24 crc kubenswrapper[4833]: I0217 13:59:24.702107 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-lqxtp"
Feb 17 13:59:24 crc kubenswrapper[4833]: I0217 13:59:24.713401 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 17 13:59:24 crc kubenswrapper[4833]: I0217 13:59:24.713727 4833 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-4rwsw"
Feb 17 13:59:24 crc kubenswrapper[4833]: I0217 13:59:24.717493 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-lqxtp"]
Feb 17 13:59:24 crc kubenswrapper[4833]: I0217 13:59:24.719299 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 17 13:59:24 crc kubenswrapper[4833]: I0217 13:59:24.724752 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5deb8fad-a9b9-4caa-be01-c9ab1b66f7cb-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-lqxtp\" (UID: \"5deb8fad-a9b9-4caa-be01-c9ab1b66f7cb\") " pod="cert-manager/cert-manager-webhook-6888856db4-lqxtp"
Feb 17 13:59:24 crc kubenswrapper[4833]: I0217 13:59:24.724793 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25mg4\" (UniqueName: \"kubernetes.io/projected/5deb8fad-a9b9-4caa-be01-c9ab1b66f7cb-kube-api-access-25mg4\") pod \"cert-manager-webhook-6888856db4-lqxtp\" (UID: \"5deb8fad-a9b9-4caa-be01-c9ab1b66f7cb\") " pod="cert-manager/cert-manager-webhook-6888856db4-lqxtp"
Feb 17 13:59:24 crc kubenswrapper[4833]: I0217 13:59:24.826766 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5deb8fad-a9b9-4caa-be01-c9ab1b66f7cb-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-lqxtp\" (UID: \"5deb8fad-a9b9-4caa-be01-c9ab1b66f7cb\") " pod="cert-manager/cert-manager-webhook-6888856db4-lqxtp"
Feb 17 13:59:24 crc kubenswrapper[4833]: I0217 13:59:24.826841 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25mg4\" (UniqueName: \"kubernetes.io/projected/5deb8fad-a9b9-4caa-be01-c9ab1b66f7cb-kube-api-access-25mg4\") pod \"cert-manager-webhook-6888856db4-lqxtp\" (UID: \"5deb8fad-a9b9-4caa-be01-c9ab1b66f7cb\") " pod="cert-manager/cert-manager-webhook-6888856db4-lqxtp"
Feb 17 13:59:24 crc kubenswrapper[4833]: I0217 13:59:24.872648 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25mg4\" (UniqueName: \"kubernetes.io/projected/5deb8fad-a9b9-4caa-be01-c9ab1b66f7cb-kube-api-access-25mg4\") pod \"cert-manager-webhook-6888856db4-lqxtp\" (UID: \"5deb8fad-a9b9-4caa-be01-c9ab1b66f7cb\") " pod="cert-manager/cert-manager-webhook-6888856db4-lqxtp"
Feb 17 13:59:24 crc kubenswrapper[4833]: I0217 13:59:24.875393 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5deb8fad-a9b9-4caa-be01-c9ab1b66f7cb-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-lqxtp\" (UID: \"5deb8fad-a9b9-4caa-be01-c9ab1b66f7cb\") " pod="cert-manager/cert-manager-webhook-6888856db4-lqxtp"
Feb 17 13:59:25 crc kubenswrapper[4833]: I0217 13:59:25.015948 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-lqxtp"
Feb 17 13:59:25 crc kubenswrapper[4833]: I0217 13:59:25.462712 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-lqxtp"]
Feb 17 13:59:25 crc kubenswrapper[4833]: W0217 13:59:25.468296 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5deb8fad_a9b9_4caa_be01_c9ab1b66f7cb.slice/crio-602e6a5953eb6124e664628071b1c17aeed1f23cce5215dacc49321be00e2b5c WatchSource:0}: Error finding container 602e6a5953eb6124e664628071b1c17aeed1f23cce5215dacc49321be00e2b5c: Status 404 returned error can't find the container with id 602e6a5953eb6124e664628071b1c17aeed1f23cce5215dacc49321be00e2b5c
Feb 17 13:59:25 crc kubenswrapper[4833]: I0217 13:59:25.826759 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-lqxtp" event={"ID":"5deb8fad-a9b9-4caa-be01-c9ab1b66f7cb","Type":"ContainerStarted","Data":"602e6a5953eb6124e664628071b1c17aeed1f23cce5215dacc49321be00e2b5c"}
Feb 17 13:59:30 crc kubenswrapper[4833]: I0217 13:59:30.856429 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-lqxtp" event={"ID":"5deb8fad-a9b9-4caa-be01-c9ab1b66f7cb","Type":"ContainerStarted","Data":"f8945fafefe2e976ac962fb3b71f77488709b27add0d9ac0a16abc99f28a960d"}
Feb 17 13:59:30 crc kubenswrapper[4833]: I0217 13:59:30.857016 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-lqxtp"
Feb 17 13:59:30 crc kubenswrapper[4833]: I0217 13:59:30.870501 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-lqxtp" podStartSLOduration=2.173116491 podStartE2EDuration="6.870483002s" podCreationTimestamp="2026-02-17 13:59:24 +0000 UTC" firstStartedPulling="2026-02-17 13:59:25.470565278 +0000 UTC m=+855.105664711" lastFinishedPulling="2026-02-17 13:59:30.167931789 +0000 UTC m=+859.803031222" observedRunningTime="2026-02-17 13:59:30.868547256 +0000 UTC m=+860.503646699" watchObservedRunningTime="2026-02-17 13:59:30.870483002 +0000 UTC m=+860.505582435"
Feb 17 13:59:32 crc kubenswrapper[4833]: I0217 13:59:32.172768 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-xw652"]
Feb 17 13:59:32 crc kubenswrapper[4833]: I0217 13:59:32.173697 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-xw652"
Feb 17 13:59:32 crc kubenswrapper[4833]: I0217 13:59:32.175819 4833 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-b68k2"
Feb 17 13:59:32 crc kubenswrapper[4833]: I0217 13:59:32.188997 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-xw652"]
Feb 17 13:59:32 crc kubenswrapper[4833]: I0217 13:59:32.233522 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4c52a718-46bf-4c12-9775-4c90e0b60065-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-xw652\" (UID: \"4c52a718-46bf-4c12-9775-4c90e0b60065\") " pod="cert-manager/cert-manager-cainjector-5545bd876-xw652"
Feb 17 13:59:32 crc kubenswrapper[4833]: I0217 13:59:32.233594 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skjhh\" (UniqueName: \"kubernetes.io/projected/4c52a718-46bf-4c12-9775-4c90e0b60065-kube-api-access-skjhh\") pod \"cert-manager-cainjector-5545bd876-xw652\" (UID: \"4c52a718-46bf-4c12-9775-4c90e0b60065\") " pod="cert-manager/cert-manager-cainjector-5545bd876-xw652"
Feb 17 13:59:32 crc kubenswrapper[4833]: I0217 13:59:32.335174 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4c52a718-46bf-4c12-9775-4c90e0b60065-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-xw652\" (UID: \"4c52a718-46bf-4c12-9775-4c90e0b60065\") " pod="cert-manager/cert-manager-cainjector-5545bd876-xw652"
Feb 17 13:59:32 crc kubenswrapper[4833]: I0217 13:59:32.335403 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skjhh\" (UniqueName: \"kubernetes.io/projected/4c52a718-46bf-4c12-9775-4c90e0b60065-kube-api-access-skjhh\") pod \"cert-manager-cainjector-5545bd876-xw652\" (UID: \"4c52a718-46bf-4c12-9775-4c90e0b60065\") " pod="cert-manager/cert-manager-cainjector-5545bd876-xw652"
Feb 17 13:59:32 crc kubenswrapper[4833]: I0217 13:59:32.351690 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skjhh\" (UniqueName: \"kubernetes.io/projected/4c52a718-46bf-4c12-9775-4c90e0b60065-kube-api-access-skjhh\") pod \"cert-manager-cainjector-5545bd876-xw652\" (UID: \"4c52a718-46bf-4c12-9775-4c90e0b60065\") " pod="cert-manager/cert-manager-cainjector-5545bd876-xw652"
Feb 17 13:59:32 crc kubenswrapper[4833]: I0217 13:59:32.352597 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4c52a718-46bf-4c12-9775-4c90e0b60065-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-xw652\" (UID: \"4c52a718-46bf-4c12-9775-4c90e0b60065\") " pod="cert-manager/cert-manager-cainjector-5545bd876-xw652"
Feb 17 13:59:32 crc kubenswrapper[4833]: I0217 13:59:32.494162 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-xw652"
Feb 17 13:59:32 crc kubenswrapper[4833]: I0217 13:59:32.751004 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-xw652"]
Feb 17 13:59:32 crc kubenswrapper[4833]: W0217 13:59:32.754012 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c52a718_46bf_4c12_9775_4c90e0b60065.slice/crio-a248dc73156a78c474867dd676cb50d7e067575b0f3982122ab1f50ae8f4ac6b WatchSource:0}: Error finding container a248dc73156a78c474867dd676cb50d7e067575b0f3982122ab1f50ae8f4ac6b: Status 404 returned error can't find the container with id a248dc73156a78c474867dd676cb50d7e067575b0f3982122ab1f50ae8f4ac6b
Feb 17 13:59:32 crc kubenswrapper[4833]: I0217 13:59:32.868099 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-xw652" event={"ID":"4c52a718-46bf-4c12-9775-4c90e0b60065","Type":"ContainerStarted","Data":"a248dc73156a78c474867dd676cb50d7e067575b0f3982122ab1f50ae8f4ac6b"}
Feb 17 13:59:33 crc kubenswrapper[4833]: I0217 13:59:33.874286 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-xw652" event={"ID":"4c52a718-46bf-4c12-9775-4c90e0b60065","Type":"ContainerStarted","Data":"7006a6c6a78ddaaec16241956ac5d44d4083e3ce5d190f37dc75cb74b6fb0fd9"}
Feb 17 13:59:33 crc kubenswrapper[4833]: I0217 13:59:33.911751 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-xw652" podStartSLOduration=1.91173383 podStartE2EDuration="1.91173383s" podCreationTimestamp="2026-02-17 13:59:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:59:33.910865065 +0000 UTC m=+863.545964518" watchObservedRunningTime="2026-02-17 13:59:33.91173383 +0000 UTC m=+863.546833263"
Feb 17 13:59:35 crc kubenswrapper[4833]: I0217 13:59:35.020024 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-lqxtp"
Feb 17 13:59:39 crc kubenswrapper[4833]: I0217 13:59:39.441045 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j5vkm"]
Feb 17 13:59:39 crc kubenswrapper[4833]: I0217 13:59:39.443344 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j5vkm"
Feb 17 13:59:39 crc kubenswrapper[4833]: I0217 13:59:39.454201 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j5vkm"]
Feb 17 13:59:39 crc kubenswrapper[4833]: I0217 13:59:39.547709 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34052ffc-4343-4af7-a722-f534edd3f1b6-catalog-content\") pod \"community-operators-j5vkm\" (UID: \"34052ffc-4343-4af7-a722-f534edd3f1b6\") " pod="openshift-marketplace/community-operators-j5vkm"
Feb 17 13:59:39 crc kubenswrapper[4833]: I0217 13:59:39.547787 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34052ffc-4343-4af7-a722-f534edd3f1b6-utilities\") pod \"community-operators-j5vkm\" (UID: \"34052ffc-4343-4af7-a722-f534edd3f1b6\") " pod="openshift-marketplace/community-operators-j5vkm"
Feb 17 13:59:39 crc kubenswrapper[4833]: I0217 13:59:39.547816 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52mj2\" (UniqueName: \"kubernetes.io/projected/34052ffc-4343-4af7-a722-f534edd3f1b6-kube-api-access-52mj2\") pod \"community-operators-j5vkm\" (UID: \"34052ffc-4343-4af7-a722-f534edd3f1b6\") " pod="openshift-marketplace/community-operators-j5vkm"
Feb 17 13:59:39 crc kubenswrapper[4833]: I0217 13:59:39.648957 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34052ffc-4343-4af7-a722-f534edd3f1b6-catalog-content\") pod \"community-operators-j5vkm\" (UID: \"34052ffc-4343-4af7-a722-f534edd3f1b6\") " pod="openshift-marketplace/community-operators-j5vkm"
Feb 17 13:59:39 crc kubenswrapper[4833]: I0217 13:59:39.649043 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34052ffc-4343-4af7-a722-f534edd3f1b6-utilities\") pod \"community-operators-j5vkm\" (UID: \"34052ffc-4343-4af7-a722-f534edd3f1b6\") " pod="openshift-marketplace/community-operators-j5vkm"
Feb 17 13:59:39 crc kubenswrapper[4833]: I0217 13:59:39.649088 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52mj2\" (UniqueName: \"kubernetes.io/projected/34052ffc-4343-4af7-a722-f534edd3f1b6-kube-api-access-52mj2\") pod \"community-operators-j5vkm\" (UID: \"34052ffc-4343-4af7-a722-f534edd3f1b6\") " pod="openshift-marketplace/community-operators-j5vkm"
Feb 17 13:59:39 crc kubenswrapper[4833]: I0217 13:59:39.649631 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34052ffc-4343-4af7-a722-f534edd3f1b6-utilities\") pod \"community-operators-j5vkm\" (UID: \"34052ffc-4343-4af7-a722-f534edd3f1b6\") " pod="openshift-marketplace/community-operators-j5vkm"
Feb 17 13:59:39 crc kubenswrapper[4833]: I0217 13:59:39.649878 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34052ffc-4343-4af7-a722-f534edd3f1b6-catalog-content\") pod \"community-operators-j5vkm\" (UID: \"34052ffc-4343-4af7-a722-f534edd3f1b6\") " pod="openshift-marketplace/community-operators-j5vkm"
Feb 17 13:59:39 crc kubenswrapper[4833]: I0217 13:59:39.667577 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52mj2\" (UniqueName: \"kubernetes.io/projected/34052ffc-4343-4af7-a722-f534edd3f1b6-kube-api-access-52mj2\") pod \"community-operators-j5vkm\" (UID: \"34052ffc-4343-4af7-a722-f534edd3f1b6\") " pod="openshift-marketplace/community-operators-j5vkm"
Feb 17 13:59:39 crc kubenswrapper[4833]: I0217 13:59:39.785718 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j5vkm"
Feb 17 13:59:40 crc kubenswrapper[4833]: I0217 13:59:40.280355 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j5vkm"]
Feb 17 13:59:40 crc kubenswrapper[4833]: I0217 13:59:40.929319 4833 generic.go:334] "Generic (PLEG): container finished" podID="34052ffc-4343-4af7-a722-f534edd3f1b6" containerID="4459183b6ad177f835b7ee9594d62203a35dc70b9ab510f66bdeb8d6313debc7" exitCode=0
Feb 17 13:59:40 crc kubenswrapper[4833]: I0217 13:59:40.929434 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vkm" event={"ID":"34052ffc-4343-4af7-a722-f534edd3f1b6","Type":"ContainerDied","Data":"4459183b6ad177f835b7ee9594d62203a35dc70b9ab510f66bdeb8d6313debc7"}
Feb 17 13:59:40 crc kubenswrapper[4833]: I0217 13:59:40.929822 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vkm" event={"ID":"34052ffc-4343-4af7-a722-f534edd3f1b6","Type":"ContainerStarted","Data":"9f991a4b5f71acc6684c5c1a7df036f21895bb020d1145d4d9efb69259aae9a4"}
Feb 17 13:59:41 crc kubenswrapper[4833]: I0217 13:59:41.936560 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vkm" event={"ID":"34052ffc-4343-4af7-a722-f534edd3f1b6","Type":"ContainerStarted","Data":"518614c325d9d90c0949c4f6a677b54ef7a5a1c32643e70e3b18b1c5fca27938"}
Feb 17 13:59:42 crc kubenswrapper[4833]: I0217 13:59:42.613797 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d2w92"]
Feb 17 13:59:42 crc kubenswrapper[4833]: I0217 13:59:42.615037 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2w92"
Feb 17 13:59:42 crc kubenswrapper[4833]: I0217 13:59:42.637248 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2w92"]
Feb 17 13:59:42 crc kubenswrapper[4833]: I0217 13:59:42.696497 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde6a56e-2c25-437d-94ab-8838c987c3bb-catalog-content\") pod \"certified-operators-d2w92\" (UID: \"cde6a56e-2c25-437d-94ab-8838c987c3bb\") " pod="openshift-marketplace/certified-operators-d2w92"
Feb 17 13:59:42 crc kubenswrapper[4833]: I0217 13:59:42.696582 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9fkm\" (UniqueName: \"kubernetes.io/projected/cde6a56e-2c25-437d-94ab-8838c987c3bb-kube-api-access-c9fkm\") pod \"certified-operators-d2w92\" (UID: \"cde6a56e-2c25-437d-94ab-8838c987c3bb\") " pod="openshift-marketplace/certified-operators-d2w92"
Feb 17 13:59:42 crc kubenswrapper[4833]: I0217 13:59:42.696603 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde6a56e-2c25-437d-94ab-8838c987c3bb-utilities\") pod \"certified-operators-d2w92\" (UID: \"cde6a56e-2c25-437d-94ab-8838c987c3bb\") " pod="openshift-marketplace/certified-operators-d2w92"
Feb 17 13:59:42 crc kubenswrapper[4833]: I0217 13:59:42.797368 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde6a56e-2c25-437d-94ab-8838c987c3bb-catalog-content\") pod \"certified-operators-d2w92\" (UID: \"cde6a56e-2c25-437d-94ab-8838c987c3bb\") " pod="openshift-marketplace/certified-operators-d2w92"
Feb 17 13:59:42 crc kubenswrapper[4833]: I0217 13:59:42.797442 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9fkm\" (UniqueName: \"kubernetes.io/projected/cde6a56e-2c25-437d-94ab-8838c987c3bb-kube-api-access-c9fkm\") pod \"certified-operators-d2w92\" (UID: \"cde6a56e-2c25-437d-94ab-8838c987c3bb\") " pod="openshift-marketplace/certified-operators-d2w92"
Feb 17 13:59:42 crc kubenswrapper[4833]: I0217 13:59:42.797465 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde6a56e-2c25-437d-94ab-8838c987c3bb-utilities\") pod \"certified-operators-d2w92\" (UID: \"cde6a56e-2c25-437d-94ab-8838c987c3bb\") " pod="openshift-marketplace/certified-operators-d2w92"
Feb 17 13:59:42 crc kubenswrapper[4833]: I0217 13:59:42.797912 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde6a56e-2c25-437d-94ab-8838c987c3bb-catalog-content\") pod \"certified-operators-d2w92\" (UID: \"cde6a56e-2c25-437d-94ab-8838c987c3bb\") " pod="openshift-marketplace/certified-operators-d2w92"
Feb 17 13:59:42 crc kubenswrapper[4833]: I0217 13:59:42.797941 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde6a56e-2c25-437d-94ab-8838c987c3bb-utilities\") pod \"certified-operators-d2w92\" (UID: \"cde6a56e-2c25-437d-94ab-8838c987c3bb\") " pod="openshift-marketplace/certified-operators-d2w92"
Feb 17 13:59:42 crc kubenswrapper[4833]: I0217 13:59:42.818713 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9fkm\" (UniqueName: \"kubernetes.io/projected/cde6a56e-2c25-437d-94ab-8838c987c3bb-kube-api-access-c9fkm\") pod \"certified-operators-d2w92\" (UID: \"cde6a56e-2c25-437d-94ab-8838c987c3bb\") " pod="openshift-marketplace/certified-operators-d2w92"
Feb 17 13:59:42 crc kubenswrapper[4833]: I0217 13:59:42.932756 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2w92"
Feb 17 13:59:42 crc kubenswrapper[4833]: I0217 13:59:42.943404 4833 generic.go:334] "Generic (PLEG): container finished" podID="34052ffc-4343-4af7-a722-f534edd3f1b6" containerID="518614c325d9d90c0949c4f6a677b54ef7a5a1c32643e70e3b18b1c5fca27938" exitCode=0
Feb 17 13:59:42 crc kubenswrapper[4833]: I0217 13:59:42.943439 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vkm" event={"ID":"34052ffc-4343-4af7-a722-f534edd3f1b6","Type":"ContainerDied","Data":"518614c325d9d90c0949c4f6a677b54ef7a5a1c32643e70e3b18b1c5fca27938"}
Feb 17 13:59:43 crc kubenswrapper[4833]: I0217 13:59:43.388997 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2w92"]
Feb 17 13:59:43 crc kubenswrapper[4833]: W0217 13:59:43.400202 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcde6a56e_2c25_437d_94ab_8838c987c3bb.slice/crio-f1cf64ad1f05ee811946bbd603cb0b28f4a86e8adaa7619dc827ca11c6e035f8 WatchSource:0}: Error finding container f1cf64ad1f05ee811946bbd603cb0b28f4a86e8adaa7619dc827ca11c6e035f8: Status 404 returned error can't find the container with id f1cf64ad1f05ee811946bbd603cb0b28f4a86e8adaa7619dc827ca11c6e035f8
Feb 17 13:59:43 crc kubenswrapper[4833]: I0217 13:59:43.952183 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vkm" event={"ID":"34052ffc-4343-4af7-a722-f534edd3f1b6","Type":"ContainerStarted","Data":"3077fef1c664fc8aa1ac44d30078212b63dac4cf6bcef757046f1f1e7e7105f7"}
Feb 17 13:59:43 crc kubenswrapper[4833]: I0217 13:59:43.954345 4833 generic.go:334] "Generic (PLEG): container finished" podID="cde6a56e-2c25-437d-94ab-8838c987c3bb" containerID="99e88f1b87d48f9400a479d48606a9d785bbddcb81dc4dbcf4a3839395a734fe" exitCode=0
Feb 17 13:59:43 crc kubenswrapper[4833]: I0217 13:59:43.954390 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2w92" event={"ID":"cde6a56e-2c25-437d-94ab-8838c987c3bb","Type":"ContainerDied","Data":"99e88f1b87d48f9400a479d48606a9d785bbddcb81dc4dbcf4a3839395a734fe"}
Feb 17 13:59:43 crc kubenswrapper[4833]: I0217 13:59:43.954433 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2w92" event={"ID":"cde6a56e-2c25-437d-94ab-8838c987c3bb","Type":"ContainerStarted","Data":"f1cf64ad1f05ee811946bbd603cb0b28f4a86e8adaa7619dc827ca11c6e035f8"}
Feb 17 13:59:43 crc kubenswrapper[4833]: I0217 13:59:43.977082 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j5vkm" podStartSLOduration=2.553367077 podStartE2EDuration="4.977063211s" podCreationTimestamp="2026-02-17 13:59:39 +0000 UTC" firstStartedPulling="2026-02-17 13:59:40.931813178 +0000 UTC m=+870.566912621" lastFinishedPulling="2026-02-17 13:59:43.355509322 +0000 UTC m=+872.990608755" observedRunningTime="2026-02-17 13:59:43.97633184 +0000 UTC m=+873.611431303" watchObservedRunningTime="2026-02-17 13:59:43.977063211 +0000 UTC m=+873.612162644"
Feb 17 13:59:44 crc kubenswrapper[4833]: I0217 13:59:44.244490 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 13:59:44 crc kubenswrapper[4833]: I0217 13:59:44.244556 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 13:59:44 crc kubenswrapper[4833]: I0217 13:59:44.968282 4833 generic.go:334] "Generic (PLEG): container finished" podID="cde6a56e-2c25-437d-94ab-8838c987c3bb" containerID="bac0b4d27b9f6d7335705b974cd5255002653f45aa9a5529a52cb381437d21d2" exitCode=0
Feb 17 13:59:44 crc kubenswrapper[4833]: I0217 13:59:44.968500 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2w92" event={"ID":"cde6a56e-2c25-437d-94ab-8838c987c3bb","Type":"ContainerDied","Data":"bac0b4d27b9f6d7335705b974cd5255002653f45aa9a5529a52cb381437d21d2"}
Feb 17 13:59:45 crc kubenswrapper[4833]: I0217 13:59:45.503003 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-pv7vq"]
Feb 17 13:59:45 crc kubenswrapper[4833]: I0217 13:59:45.504219 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-pv7vq"
Feb 17 13:59:45 crc kubenswrapper[4833]: I0217 13:59:45.506974 4833 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2cvst"
Feb 17 13:59:45 crc kubenswrapper[4833]: I0217 13:59:45.513919 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-pv7vq"]
Feb 17 13:59:45 crc kubenswrapper[4833]: I0217 13:59:45.630858 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3d30ed8b-6183-40ff-89c2-869d421aaf38-bound-sa-token\") pod \"cert-manager-545d4d4674-pv7vq\" (UID: \"3d30ed8b-6183-40ff-89c2-869d421aaf38\") " pod="cert-manager/cert-manager-545d4d4674-pv7vq"
Feb 17 13:59:45 crc kubenswrapper[4833]: I0217 13:59:45.630932 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22fw9\" (UniqueName: \"kubernetes.io/projected/3d30ed8b-6183-40ff-89c2-869d421aaf38-kube-api-access-22fw9\") pod \"cert-manager-545d4d4674-pv7vq\" (UID: \"3d30ed8b-6183-40ff-89c2-869d421aaf38\") " pod="cert-manager/cert-manager-545d4d4674-pv7vq"
Feb 17 13:59:45 crc kubenswrapper[4833]: I0217 13:59:45.732275 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3d30ed8b-6183-40ff-89c2-869d421aaf38-bound-sa-token\") pod \"cert-manager-545d4d4674-pv7vq\" (UID: \"3d30ed8b-6183-40ff-89c2-869d421aaf38\") " pod="cert-manager/cert-manager-545d4d4674-pv7vq"
Feb 17 13:59:45 crc kubenswrapper[4833]: I0217 13:59:45.732325 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22fw9\" (UniqueName: \"kubernetes.io/projected/3d30ed8b-6183-40ff-89c2-869d421aaf38-kube-api-access-22fw9\") pod \"cert-manager-545d4d4674-pv7vq\" (UID:
\"3d30ed8b-6183-40ff-89c2-869d421aaf38\") " pod="cert-manager/cert-manager-545d4d4674-pv7vq" Feb 17 13:59:45 crc kubenswrapper[4833]: I0217 13:59:45.754267 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3d30ed8b-6183-40ff-89c2-869d421aaf38-bound-sa-token\") pod \"cert-manager-545d4d4674-pv7vq\" (UID: \"3d30ed8b-6183-40ff-89c2-869d421aaf38\") " pod="cert-manager/cert-manager-545d4d4674-pv7vq" Feb 17 13:59:45 crc kubenswrapper[4833]: I0217 13:59:45.759329 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22fw9\" (UniqueName: \"kubernetes.io/projected/3d30ed8b-6183-40ff-89c2-869d421aaf38-kube-api-access-22fw9\") pod \"cert-manager-545d4d4674-pv7vq\" (UID: \"3d30ed8b-6183-40ff-89c2-869d421aaf38\") " pod="cert-manager/cert-manager-545d4d4674-pv7vq" Feb 17 13:59:45 crc kubenswrapper[4833]: I0217 13:59:45.819864 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-pv7vq" Feb 17 13:59:45 crc kubenswrapper[4833]: I0217 13:59:45.980327 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2w92" event={"ID":"cde6a56e-2c25-437d-94ab-8838c987c3bb","Type":"ContainerStarted","Data":"dae51c42b18f488dbb40634c83346231ddd4de52abae42c2ae21e42aba638d7a"} Feb 17 13:59:45 crc kubenswrapper[4833]: I0217 13:59:45.998692 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d2w92" podStartSLOduration=2.617123058 podStartE2EDuration="3.998671101s" podCreationTimestamp="2026-02-17 13:59:42 +0000 UTC" firstStartedPulling="2026-02-17 13:59:43.955236477 +0000 UTC m=+873.590335920" lastFinishedPulling="2026-02-17 13:59:45.33678453 +0000 UTC m=+874.971883963" observedRunningTime="2026-02-17 13:59:45.997068605 +0000 UTC m=+875.632168038" watchObservedRunningTime="2026-02-17 13:59:45.998671101 +0000 
UTC m=+875.633770544" Feb 17 13:59:46 crc kubenswrapper[4833]: I0217 13:59:46.275919 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-pv7vq"] Feb 17 13:59:46 crc kubenswrapper[4833]: I0217 13:59:46.988906 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-pv7vq" event={"ID":"3d30ed8b-6183-40ff-89c2-869d421aaf38","Type":"ContainerStarted","Data":"b86df22c1d599499e548ac8e91c608835c3988d1328db1c6e305593d6e307c00"} Feb 17 13:59:46 crc kubenswrapper[4833]: I0217 13:59:46.989304 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-pv7vq" event={"ID":"3d30ed8b-6183-40ff-89c2-869d421aaf38","Type":"ContainerStarted","Data":"f0bf8a96563f5122283f615d4b51c3b85217d354eab3971ed527bbdc0109bfbb"} Feb 17 13:59:47 crc kubenswrapper[4833]: I0217 13:59:47.012513 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-pv7vq" podStartSLOduration=2.012473292 podStartE2EDuration="2.012473292s" podCreationTimestamp="2026-02-17 13:59:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:59:47.012369379 +0000 UTC m=+876.647468812" watchObservedRunningTime="2026-02-17 13:59:47.012473292 +0000 UTC m=+876.647572765" Feb 17 13:59:49 crc kubenswrapper[4833]: I0217 13:59:49.008795 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8dmth"] Feb 17 13:59:49 crc kubenswrapper[4833]: I0217 13:59:49.010736 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8dmth" Feb 17 13:59:49 crc kubenswrapper[4833]: I0217 13:59:49.027269 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8dmth"] Feb 17 13:59:49 crc kubenswrapper[4833]: I0217 13:59:49.181578 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1ff29e-a95d-4cf6-9d33-5378e2b960c8-utilities\") pod \"redhat-marketplace-8dmth\" (UID: \"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8\") " pod="openshift-marketplace/redhat-marketplace-8dmth" Feb 17 13:59:49 crc kubenswrapper[4833]: I0217 13:59:49.181652 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grjl8\" (UniqueName: \"kubernetes.io/projected/1f1ff29e-a95d-4cf6-9d33-5378e2b960c8-kube-api-access-grjl8\") pod \"redhat-marketplace-8dmth\" (UID: \"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8\") " pod="openshift-marketplace/redhat-marketplace-8dmth" Feb 17 13:59:49 crc kubenswrapper[4833]: I0217 13:59:49.181684 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1ff29e-a95d-4cf6-9d33-5378e2b960c8-catalog-content\") pod \"redhat-marketplace-8dmth\" (UID: \"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8\") " pod="openshift-marketplace/redhat-marketplace-8dmth" Feb 17 13:59:49 crc kubenswrapper[4833]: I0217 13:59:49.283007 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1ff29e-a95d-4cf6-9d33-5378e2b960c8-utilities\") pod \"redhat-marketplace-8dmth\" (UID: \"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8\") " pod="openshift-marketplace/redhat-marketplace-8dmth" Feb 17 13:59:49 crc kubenswrapper[4833]: I0217 13:59:49.283720 4833 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-grjl8\" (UniqueName: \"kubernetes.io/projected/1f1ff29e-a95d-4cf6-9d33-5378e2b960c8-kube-api-access-grjl8\") pod \"redhat-marketplace-8dmth\" (UID: \"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8\") " pod="openshift-marketplace/redhat-marketplace-8dmth" Feb 17 13:59:49 crc kubenswrapper[4833]: I0217 13:59:49.284127 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1ff29e-a95d-4cf6-9d33-5378e2b960c8-catalog-content\") pod \"redhat-marketplace-8dmth\" (UID: \"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8\") " pod="openshift-marketplace/redhat-marketplace-8dmth" Feb 17 13:59:49 crc kubenswrapper[4833]: I0217 13:59:49.283737 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1ff29e-a95d-4cf6-9d33-5378e2b960c8-utilities\") pod \"redhat-marketplace-8dmth\" (UID: \"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8\") " pod="openshift-marketplace/redhat-marketplace-8dmth" Feb 17 13:59:49 crc kubenswrapper[4833]: I0217 13:59:49.284533 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1ff29e-a95d-4cf6-9d33-5378e2b960c8-catalog-content\") pod \"redhat-marketplace-8dmth\" (UID: \"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8\") " pod="openshift-marketplace/redhat-marketplace-8dmth" Feb 17 13:59:49 crc kubenswrapper[4833]: I0217 13:59:49.314841 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grjl8\" (UniqueName: \"kubernetes.io/projected/1f1ff29e-a95d-4cf6-9d33-5378e2b960c8-kube-api-access-grjl8\") pod \"redhat-marketplace-8dmth\" (UID: \"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8\") " pod="openshift-marketplace/redhat-marketplace-8dmth" Feb 17 13:59:49 crc kubenswrapper[4833]: I0217 13:59:49.329772 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8dmth" Feb 17 13:59:49 crc kubenswrapper[4833]: I0217 13:59:49.786069 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j5vkm" Feb 17 13:59:49 crc kubenswrapper[4833]: I0217 13:59:49.786442 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j5vkm" Feb 17 13:59:49 crc kubenswrapper[4833]: I0217 13:59:49.821736 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8dmth"] Feb 17 13:59:49 crc kubenswrapper[4833]: W0217 13:59:49.838251 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f1ff29e_a95d_4cf6_9d33_5378e2b960c8.slice/crio-d0e08179625ffb19a900d058416438f451a77653a2166578e54f807cb39df66c WatchSource:0}: Error finding container d0e08179625ffb19a900d058416438f451a77653a2166578e54f807cb39df66c: Status 404 returned error can't find the container with id d0e08179625ffb19a900d058416438f451a77653a2166578e54f807cb39df66c Feb 17 13:59:49 crc kubenswrapper[4833]: I0217 13:59:49.844225 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j5vkm" Feb 17 13:59:50 crc kubenswrapper[4833]: I0217 13:59:50.011195 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8dmth" event={"ID":"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8","Type":"ContainerStarted","Data":"d0e08179625ffb19a900d058416438f451a77653a2166578e54f807cb39df66c"} Feb 17 13:59:50 crc kubenswrapper[4833]: I0217 13:59:50.061696 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j5vkm" Feb 17 13:59:51 crc kubenswrapper[4833]: I0217 13:59:51.017894 4833 generic.go:334] "Generic (PLEG): container finished" 
podID="1f1ff29e-a95d-4cf6-9d33-5378e2b960c8" containerID="1333efa37c5a43d26914fe63a5a9e0637e74178a996cedf03505a515f6211437" exitCode=0 Feb 17 13:59:51 crc kubenswrapper[4833]: I0217 13:59:51.017981 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8dmth" event={"ID":"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8","Type":"ContainerDied","Data":"1333efa37c5a43d26914fe63a5a9e0637e74178a996cedf03505a515f6211437"} Feb 17 13:59:52 crc kubenswrapper[4833]: I0217 13:59:52.027637 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8dmth" event={"ID":"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8","Type":"ContainerStarted","Data":"47383ae067a56ae65bab4d0a346570f90efcda57ff3f32f4a96907fddae7077c"} Feb 17 13:59:52 crc kubenswrapper[4833]: I0217 13:59:52.933583 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d2w92" Feb 17 13:59:52 crc kubenswrapper[4833]: I0217 13:59:52.933662 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d2w92" Feb 17 13:59:52 crc kubenswrapper[4833]: I0217 13:59:52.985808 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d2w92" Feb 17 13:59:53 crc kubenswrapper[4833]: I0217 13:59:53.036482 4833 generic.go:334] "Generic (PLEG): container finished" podID="1f1ff29e-a95d-4cf6-9d33-5378e2b960c8" containerID="47383ae067a56ae65bab4d0a346570f90efcda57ff3f32f4a96907fddae7077c" exitCode=0 Feb 17 13:59:53 crc kubenswrapper[4833]: I0217 13:59:53.036539 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8dmth" event={"ID":"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8","Type":"ContainerDied","Data":"47383ae067a56ae65bab4d0a346570f90efcda57ff3f32f4a96907fddae7077c"} Feb 17 13:59:53 crc kubenswrapper[4833]: I0217 13:59:53.081122 4833 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d2w92" Feb 17 13:59:53 crc kubenswrapper[4833]: I0217 13:59:53.407896 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-54l4d"] Feb 17 13:59:53 crc kubenswrapper[4833]: I0217 13:59:53.409287 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-54l4d" Feb 17 13:59:53 crc kubenswrapper[4833]: I0217 13:59:53.412517 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-47vzh" Feb 17 13:59:53 crc kubenswrapper[4833]: I0217 13:59:53.412626 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 17 13:59:53 crc kubenswrapper[4833]: I0217 13:59:53.412636 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 17 13:59:53 crc kubenswrapper[4833]: I0217 13:59:53.421769 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-54l4d"] Feb 17 13:59:53 crc kubenswrapper[4833]: I0217 13:59:53.561601 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bct9h\" (UniqueName: \"kubernetes.io/projected/8a60a0a6-6ea5-4412-9ff0-b384e78d3c47-kube-api-access-bct9h\") pod \"openstack-operator-index-54l4d\" (UID: \"8a60a0a6-6ea5-4412-9ff0-b384e78d3c47\") " pod="openstack-operators/openstack-operator-index-54l4d" Feb 17 13:59:53 crc kubenswrapper[4833]: I0217 13:59:53.662836 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bct9h\" (UniqueName: \"kubernetes.io/projected/8a60a0a6-6ea5-4412-9ff0-b384e78d3c47-kube-api-access-bct9h\") pod \"openstack-operator-index-54l4d\" (UID: 
\"8a60a0a6-6ea5-4412-9ff0-b384e78d3c47\") " pod="openstack-operators/openstack-operator-index-54l4d" Feb 17 13:59:53 crc kubenswrapper[4833]: I0217 13:59:53.699683 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bct9h\" (UniqueName: \"kubernetes.io/projected/8a60a0a6-6ea5-4412-9ff0-b384e78d3c47-kube-api-access-bct9h\") pod \"openstack-operator-index-54l4d\" (UID: \"8a60a0a6-6ea5-4412-9ff0-b384e78d3c47\") " pod="openstack-operators/openstack-operator-index-54l4d" Feb 17 13:59:53 crc kubenswrapper[4833]: I0217 13:59:53.732777 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-54l4d" Feb 17 13:59:54 crc kubenswrapper[4833]: I0217 13:59:54.044354 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8dmth" event={"ID":"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8","Type":"ContainerStarted","Data":"ad3c45dd7a4e38fccc643c3a4dd76bb2dc2e1401d9e5d278118d5df95e6df484"} Feb 17 13:59:54 crc kubenswrapper[4833]: I0217 13:59:54.062768 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8dmth" podStartSLOduration=3.574405591 podStartE2EDuration="6.062752014s" podCreationTimestamp="2026-02-17 13:59:48 +0000 UTC" firstStartedPulling="2026-02-17 13:59:51.019773776 +0000 UTC m=+880.654873209" lastFinishedPulling="2026-02-17 13:59:53.508120199 +0000 UTC m=+883.143219632" observedRunningTime="2026-02-17 13:59:54.05877247 +0000 UTC m=+883.693871913" watchObservedRunningTime="2026-02-17 13:59:54.062752014 +0000 UTC m=+883.697851447" Feb 17 13:59:54 crc kubenswrapper[4833]: W0217 13:59:54.249613 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a60a0a6_6ea5_4412_9ff0_b384e78d3c47.slice/crio-f5f36abd9b5c2f899870f526fd57e1e1d9ebb0cc5925e80a6dd885986a392f88 WatchSource:0}: Error finding 
container f5f36abd9b5c2f899870f526fd57e1e1d9ebb0cc5925e80a6dd885986a392f88: Status 404 returned error can't find the container with id f5f36abd9b5c2f899870f526fd57e1e1d9ebb0cc5925e80a6dd885986a392f88 Feb 17 13:59:54 crc kubenswrapper[4833]: I0217 13:59:54.252366 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-54l4d"] Feb 17 13:59:54 crc kubenswrapper[4833]: I0217 13:59:54.601309 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j5vkm"] Feb 17 13:59:54 crc kubenswrapper[4833]: I0217 13:59:54.601557 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j5vkm" podUID="34052ffc-4343-4af7-a722-f534edd3f1b6" containerName="registry-server" containerID="cri-o://3077fef1c664fc8aa1ac44d30078212b63dac4cf6bcef757046f1f1e7e7105f7" gracePeriod=2 Feb 17 13:59:55 crc kubenswrapper[4833]: I0217 13:59:55.056550 4833 generic.go:334] "Generic (PLEG): container finished" podID="34052ffc-4343-4af7-a722-f534edd3f1b6" containerID="3077fef1c664fc8aa1ac44d30078212b63dac4cf6bcef757046f1f1e7e7105f7" exitCode=0 Feb 17 13:59:55 crc kubenswrapper[4833]: I0217 13:59:55.056704 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vkm" event={"ID":"34052ffc-4343-4af7-a722-f534edd3f1b6","Type":"ContainerDied","Data":"3077fef1c664fc8aa1ac44d30078212b63dac4cf6bcef757046f1f1e7e7105f7"} Feb 17 13:59:55 crc kubenswrapper[4833]: I0217 13:59:55.058407 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-54l4d" event={"ID":"8a60a0a6-6ea5-4412-9ff0-b384e78d3c47","Type":"ContainerStarted","Data":"f5f36abd9b5c2f899870f526fd57e1e1d9ebb0cc5925e80a6dd885986a392f88"} Feb 17 13:59:55 crc kubenswrapper[4833]: I0217 13:59:55.153951 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j5vkm" Feb 17 13:59:55 crc kubenswrapper[4833]: I0217 13:59:55.284216 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34052ffc-4343-4af7-a722-f534edd3f1b6-utilities\") pod \"34052ffc-4343-4af7-a722-f534edd3f1b6\" (UID: \"34052ffc-4343-4af7-a722-f534edd3f1b6\") " Feb 17 13:59:55 crc kubenswrapper[4833]: I0217 13:59:55.284383 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52mj2\" (UniqueName: \"kubernetes.io/projected/34052ffc-4343-4af7-a722-f534edd3f1b6-kube-api-access-52mj2\") pod \"34052ffc-4343-4af7-a722-f534edd3f1b6\" (UID: \"34052ffc-4343-4af7-a722-f534edd3f1b6\") " Feb 17 13:59:55 crc kubenswrapper[4833]: I0217 13:59:55.284416 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34052ffc-4343-4af7-a722-f534edd3f1b6-catalog-content\") pod \"34052ffc-4343-4af7-a722-f534edd3f1b6\" (UID: \"34052ffc-4343-4af7-a722-f534edd3f1b6\") " Feb 17 13:59:55 crc kubenswrapper[4833]: I0217 13:59:55.285775 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34052ffc-4343-4af7-a722-f534edd3f1b6-utilities" (OuterVolumeSpecName: "utilities") pod "34052ffc-4343-4af7-a722-f534edd3f1b6" (UID: "34052ffc-4343-4af7-a722-f534edd3f1b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:59:55 crc kubenswrapper[4833]: I0217 13:59:55.293386 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34052ffc-4343-4af7-a722-f534edd3f1b6-kube-api-access-52mj2" (OuterVolumeSpecName: "kube-api-access-52mj2") pod "34052ffc-4343-4af7-a722-f534edd3f1b6" (UID: "34052ffc-4343-4af7-a722-f534edd3f1b6"). InnerVolumeSpecName "kube-api-access-52mj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:59:55 crc kubenswrapper[4833]: I0217 13:59:55.350233 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34052ffc-4343-4af7-a722-f534edd3f1b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34052ffc-4343-4af7-a722-f534edd3f1b6" (UID: "34052ffc-4343-4af7-a722-f534edd3f1b6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:59:55 crc kubenswrapper[4833]: I0217 13:59:55.385738 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34052ffc-4343-4af7-a722-f534edd3f1b6-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:55 crc kubenswrapper[4833]: I0217 13:59:55.385773 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52mj2\" (UniqueName: \"kubernetes.io/projected/34052ffc-4343-4af7-a722-f534edd3f1b6-kube-api-access-52mj2\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:55 crc kubenswrapper[4833]: I0217 13:59:55.385782 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34052ffc-4343-4af7-a722-f534edd3f1b6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:56 crc kubenswrapper[4833]: I0217 13:59:56.068901 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vkm" event={"ID":"34052ffc-4343-4af7-a722-f534edd3f1b6","Type":"ContainerDied","Data":"9f991a4b5f71acc6684c5c1a7df036f21895bb020d1145d4d9efb69259aae9a4"} Feb 17 13:59:56 crc kubenswrapper[4833]: I0217 13:59:56.068957 4833 scope.go:117] "RemoveContainer" containerID="3077fef1c664fc8aa1ac44d30078212b63dac4cf6bcef757046f1f1e7e7105f7" Feb 17 13:59:56 crc kubenswrapper[4833]: I0217 13:59:56.068970 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j5vkm" Feb 17 13:59:56 crc kubenswrapper[4833]: I0217 13:59:56.098668 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j5vkm"] Feb 17 13:59:56 crc kubenswrapper[4833]: I0217 13:59:56.105731 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j5vkm"] Feb 17 13:59:56 crc kubenswrapper[4833]: I0217 13:59:56.205896 4833 scope.go:117] "RemoveContainer" containerID="518614c325d9d90c0949c4f6a677b54ef7a5a1c32643e70e3b18b1c5fca27938" Feb 17 13:59:56 crc kubenswrapper[4833]: I0217 13:59:56.746028 4833 scope.go:117] "RemoveContainer" containerID="4459183b6ad177f835b7ee9594d62203a35dc70b9ab510f66bdeb8d6313debc7" Feb 17 13:59:57 crc kubenswrapper[4833]: I0217 13:59:57.049110 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34052ffc-4343-4af7-a722-f534edd3f1b6" path="/var/lib/kubelet/pods/34052ffc-4343-4af7-a722-f534edd3f1b6/volumes" Feb 17 13:59:57 crc kubenswrapper[4833]: I0217 13:59:57.075537 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-54l4d" event={"ID":"8a60a0a6-6ea5-4412-9ff0-b384e78d3c47","Type":"ContainerStarted","Data":"5b4dd617def1e12bd8ae0843936c7e987de4d06fed9d0193cd13ca86a0fe9d76"} Feb 17 13:59:57 crc kubenswrapper[4833]: I0217 13:59:57.092101 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-54l4d" podStartSLOduration=1.46913044 podStartE2EDuration="4.092083491s" podCreationTimestamp="2026-02-17 13:59:53 +0000 UTC" firstStartedPulling="2026-02-17 13:59:54.253586519 +0000 UTC m=+883.888685952" lastFinishedPulling="2026-02-17 13:59:56.87653957 +0000 UTC m=+886.511639003" observedRunningTime="2026-02-17 13:59:57.091419722 +0000 UTC m=+886.726519165" watchObservedRunningTime="2026-02-17 13:59:57.092083491 +0000 UTC m=+886.727182924" Feb 17 
13:59:58 crc kubenswrapper[4833]: I0217 13:59:58.997049 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2w92"] Feb 17 13:59:58 crc kubenswrapper[4833]: I0217 13:59:58.997526 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d2w92" podUID="cde6a56e-2c25-437d-94ab-8838c987c3bb" containerName="registry-server" containerID="cri-o://dae51c42b18f488dbb40634c83346231ddd4de52abae42c2ae21e42aba638d7a" gracePeriod=2 Feb 17 13:59:59 crc kubenswrapper[4833]: I0217 13:59:59.330631 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8dmth" Feb 17 13:59:59 crc kubenswrapper[4833]: I0217 13:59:59.330688 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8dmth" Feb 17 13:59:59 crc kubenswrapper[4833]: I0217 13:59:59.340516 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d2w92" Feb 17 13:59:59 crc kubenswrapper[4833]: I0217 13:59:59.378793 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8dmth" Feb 17 13:59:59 crc kubenswrapper[4833]: I0217 13:59:59.445829 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde6a56e-2c25-437d-94ab-8838c987c3bb-utilities\") pod \"cde6a56e-2c25-437d-94ab-8838c987c3bb\" (UID: \"cde6a56e-2c25-437d-94ab-8838c987c3bb\") " Feb 17 13:59:59 crc kubenswrapper[4833]: I0217 13:59:59.445972 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9fkm\" (UniqueName: \"kubernetes.io/projected/cde6a56e-2c25-437d-94ab-8838c987c3bb-kube-api-access-c9fkm\") pod \"cde6a56e-2c25-437d-94ab-8838c987c3bb\" (UID: \"cde6a56e-2c25-437d-94ab-8838c987c3bb\") " Feb 17 13:59:59 crc kubenswrapper[4833]: I0217 13:59:59.446017 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde6a56e-2c25-437d-94ab-8838c987c3bb-catalog-content\") pod \"cde6a56e-2c25-437d-94ab-8838c987c3bb\" (UID: \"cde6a56e-2c25-437d-94ab-8838c987c3bb\") " Feb 17 13:59:59 crc kubenswrapper[4833]: I0217 13:59:59.446773 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cde6a56e-2c25-437d-94ab-8838c987c3bb-utilities" (OuterVolumeSpecName: "utilities") pod "cde6a56e-2c25-437d-94ab-8838c987c3bb" (UID: "cde6a56e-2c25-437d-94ab-8838c987c3bb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:59:59 crc kubenswrapper[4833]: I0217 13:59:59.450994 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cde6a56e-2c25-437d-94ab-8838c987c3bb-kube-api-access-c9fkm" (OuterVolumeSpecName: "kube-api-access-c9fkm") pod "cde6a56e-2c25-437d-94ab-8838c987c3bb" (UID: "cde6a56e-2c25-437d-94ab-8838c987c3bb"). InnerVolumeSpecName "kube-api-access-c9fkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:59:59 crc kubenswrapper[4833]: I0217 13:59:59.498163 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cde6a56e-2c25-437d-94ab-8838c987c3bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cde6a56e-2c25-437d-94ab-8838c987c3bb" (UID: "cde6a56e-2c25-437d-94ab-8838c987c3bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:59:59 crc kubenswrapper[4833]: I0217 13:59:59.547459 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9fkm\" (UniqueName: \"kubernetes.io/projected/cde6a56e-2c25-437d-94ab-8838c987c3bb-kube-api-access-c9fkm\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:59 crc kubenswrapper[4833]: I0217 13:59:59.547488 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde6a56e-2c25-437d-94ab-8838c987c3bb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:59 crc kubenswrapper[4833]: I0217 13:59:59.547498 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde6a56e-2c25-437d-94ab-8838c987c3bb-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.099561 4833 generic.go:334] "Generic (PLEG): container finished" podID="cde6a56e-2c25-437d-94ab-8838c987c3bb" 
containerID="dae51c42b18f488dbb40634c83346231ddd4de52abae42c2ae21e42aba638d7a" exitCode=0 Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.099618 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2w92" event={"ID":"cde6a56e-2c25-437d-94ab-8838c987c3bb","Type":"ContainerDied","Data":"dae51c42b18f488dbb40634c83346231ddd4de52abae42c2ae21e42aba638d7a"} Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.100335 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2w92" event={"ID":"cde6a56e-2c25-437d-94ab-8838c987c3bb","Type":"ContainerDied","Data":"f1cf64ad1f05ee811946bbd603cb0b28f4a86e8adaa7619dc827ca11c6e035f8"} Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.100373 4833 scope.go:117] "RemoveContainer" containerID="dae51c42b18f488dbb40634c83346231ddd4de52abae42c2ae21e42aba638d7a" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.099634 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d2w92" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.134468 4833 scope.go:117] "RemoveContainer" containerID="bac0b4d27b9f6d7335705b974cd5255002653f45aa9a5529a52cb381437d21d2" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.150895 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2w92"] Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.163353 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8dmth" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.166676 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d2w92"] Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.179496 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb"] Feb 17 14:00:00 crc kubenswrapper[4833]: E0217 14:00:00.179841 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde6a56e-2c25-437d-94ab-8838c987c3bb" containerName="registry-server" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.179864 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde6a56e-2c25-437d-94ab-8838c987c3bb" containerName="registry-server" Feb 17 14:00:00 crc kubenswrapper[4833]: E0217 14:00:00.179878 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34052ffc-4343-4af7-a722-f534edd3f1b6" containerName="extract-utilities" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.179887 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="34052ffc-4343-4af7-a722-f534edd3f1b6" containerName="extract-utilities" Feb 17 14:00:00 crc kubenswrapper[4833]: E0217 14:00:00.179900 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34052ffc-4343-4af7-a722-f534edd3f1b6" containerName="extract-content" Feb 17 
14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.179908 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="34052ffc-4343-4af7-a722-f534edd3f1b6" containerName="extract-content" Feb 17 14:00:00 crc kubenswrapper[4833]: E0217 14:00:00.179928 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde6a56e-2c25-437d-94ab-8838c987c3bb" containerName="extract-content" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.179936 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde6a56e-2c25-437d-94ab-8838c987c3bb" containerName="extract-content" Feb 17 14:00:00 crc kubenswrapper[4833]: E0217 14:00:00.179949 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde6a56e-2c25-437d-94ab-8838c987c3bb" containerName="extract-utilities" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.179956 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde6a56e-2c25-437d-94ab-8838c987c3bb" containerName="extract-utilities" Feb 17 14:00:00 crc kubenswrapper[4833]: E0217 14:00:00.179968 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34052ffc-4343-4af7-a722-f534edd3f1b6" containerName="registry-server" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.179975 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="34052ffc-4343-4af7-a722-f534edd3f1b6" containerName="registry-server" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.180137 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cde6a56e-2c25-437d-94ab-8838c987c3bb" containerName="registry-server" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.180149 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="34052ffc-4343-4af7-a722-f534edd3f1b6" containerName="registry-server" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.180772 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.184234 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.184632 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.188244 4833 scope.go:117] "RemoveContainer" containerID="99e88f1b87d48f9400a479d48606a9d785bbddcb81dc4dbcf4a3839395a734fe" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.190985 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb"] Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.245477 4833 scope.go:117] "RemoveContainer" containerID="dae51c42b18f488dbb40634c83346231ddd4de52abae42c2ae21e42aba638d7a" Feb 17 14:00:00 crc kubenswrapper[4833]: E0217 14:00:00.246489 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dae51c42b18f488dbb40634c83346231ddd4de52abae42c2ae21e42aba638d7a\": container with ID starting with dae51c42b18f488dbb40634c83346231ddd4de52abae42c2ae21e42aba638d7a not found: ID does not exist" containerID="dae51c42b18f488dbb40634c83346231ddd4de52abae42c2ae21e42aba638d7a" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.246545 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae51c42b18f488dbb40634c83346231ddd4de52abae42c2ae21e42aba638d7a"} err="failed to get container status \"dae51c42b18f488dbb40634c83346231ddd4de52abae42c2ae21e42aba638d7a\": rpc error: code = NotFound desc = could not find container \"dae51c42b18f488dbb40634c83346231ddd4de52abae42c2ae21e42aba638d7a\": container with 
ID starting with dae51c42b18f488dbb40634c83346231ddd4de52abae42c2ae21e42aba638d7a not found: ID does not exist" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.246584 4833 scope.go:117] "RemoveContainer" containerID="bac0b4d27b9f6d7335705b974cd5255002653f45aa9a5529a52cb381437d21d2" Feb 17 14:00:00 crc kubenswrapper[4833]: E0217 14:00:00.246946 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bac0b4d27b9f6d7335705b974cd5255002653f45aa9a5529a52cb381437d21d2\": container with ID starting with bac0b4d27b9f6d7335705b974cd5255002653f45aa9a5529a52cb381437d21d2 not found: ID does not exist" containerID="bac0b4d27b9f6d7335705b974cd5255002653f45aa9a5529a52cb381437d21d2" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.246984 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bac0b4d27b9f6d7335705b974cd5255002653f45aa9a5529a52cb381437d21d2"} err="failed to get container status \"bac0b4d27b9f6d7335705b974cd5255002653f45aa9a5529a52cb381437d21d2\": rpc error: code = NotFound desc = could not find container \"bac0b4d27b9f6d7335705b974cd5255002653f45aa9a5529a52cb381437d21d2\": container with ID starting with bac0b4d27b9f6d7335705b974cd5255002653f45aa9a5529a52cb381437d21d2 not found: ID does not exist" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.247010 4833 scope.go:117] "RemoveContainer" containerID="99e88f1b87d48f9400a479d48606a9d785bbddcb81dc4dbcf4a3839395a734fe" Feb 17 14:00:00 crc kubenswrapper[4833]: E0217 14:00:00.247346 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99e88f1b87d48f9400a479d48606a9d785bbddcb81dc4dbcf4a3839395a734fe\": container with ID starting with 99e88f1b87d48f9400a479d48606a9d785bbddcb81dc4dbcf4a3839395a734fe not found: ID does not exist" containerID="99e88f1b87d48f9400a479d48606a9d785bbddcb81dc4dbcf4a3839395a734fe" Feb 17 
14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.247377 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99e88f1b87d48f9400a479d48606a9d785bbddcb81dc4dbcf4a3839395a734fe"} err="failed to get container status \"99e88f1b87d48f9400a479d48606a9d785bbddcb81dc4dbcf4a3839395a734fe\": rpc error: code = NotFound desc = could not find container \"99e88f1b87d48f9400a479d48606a9d785bbddcb81dc4dbcf4a3839395a734fe\": container with ID starting with 99e88f1b87d48f9400a479d48606a9d785bbddcb81dc4dbcf4a3839395a734fe not found: ID does not exist" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.359305 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08511c57-c43c-4bf8-9133-7c910a4382dc-config-volume\") pod \"collect-profiles-29522280-rctlb\" (UID: \"08511c57-c43c-4bf8-9133-7c910a4382dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.359365 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8wxq\" (UniqueName: \"kubernetes.io/projected/08511c57-c43c-4bf8-9133-7c910a4382dc-kube-api-access-c8wxq\") pod \"collect-profiles-29522280-rctlb\" (UID: \"08511c57-c43c-4bf8-9133-7c910a4382dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.359479 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08511c57-c43c-4bf8-9133-7c910a4382dc-secret-volume\") pod \"collect-profiles-29522280-rctlb\" (UID: \"08511c57-c43c-4bf8-9133-7c910a4382dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.460789 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08511c57-c43c-4bf8-9133-7c910a4382dc-secret-volume\") pod \"collect-profiles-29522280-rctlb\" (UID: \"08511c57-c43c-4bf8-9133-7c910a4382dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.460881 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08511c57-c43c-4bf8-9133-7c910a4382dc-config-volume\") pod \"collect-profiles-29522280-rctlb\" (UID: \"08511c57-c43c-4bf8-9133-7c910a4382dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.460915 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8wxq\" (UniqueName: \"kubernetes.io/projected/08511c57-c43c-4bf8-9133-7c910a4382dc-kube-api-access-c8wxq\") pod \"collect-profiles-29522280-rctlb\" (UID: \"08511c57-c43c-4bf8-9133-7c910a4382dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.461654 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08511c57-c43c-4bf8-9133-7c910a4382dc-config-volume\") pod \"collect-profiles-29522280-rctlb\" (UID: \"08511c57-c43c-4bf8-9133-7c910a4382dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.464355 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08511c57-c43c-4bf8-9133-7c910a4382dc-secret-volume\") pod \"collect-profiles-29522280-rctlb\" (UID: \"08511c57-c43c-4bf8-9133-7c910a4382dc\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.482895 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8wxq\" (UniqueName: \"kubernetes.io/projected/08511c57-c43c-4bf8-9133-7c910a4382dc-kube-api-access-c8wxq\") pod \"collect-profiles-29522280-rctlb\" (UID: \"08511c57-c43c-4bf8-9133-7c910a4382dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.568351 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb" Feb 17 14:00:00 crc kubenswrapper[4833]: I0217 14:00:00.965099 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb"] Feb 17 14:00:00 crc kubenswrapper[4833]: W0217 14:00:00.971652 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08511c57_c43c_4bf8_9133_7c910a4382dc.slice/crio-80c36779445cc017bc514c5dd19879d057bb797a073215ebd7a2249160422b0e WatchSource:0}: Error finding container 80c36779445cc017bc514c5dd19879d057bb797a073215ebd7a2249160422b0e: Status 404 returned error can't find the container with id 80c36779445cc017bc514c5dd19879d057bb797a073215ebd7a2249160422b0e Feb 17 14:00:01 crc kubenswrapper[4833]: I0217 14:00:01.049290 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cde6a56e-2c25-437d-94ab-8838c987c3bb" path="/var/lib/kubelet/pods/cde6a56e-2c25-437d-94ab-8838c987c3bb/volumes" Feb 17 14:00:01 crc kubenswrapper[4833]: I0217 14:00:01.109680 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb" 
event={"ID":"08511c57-c43c-4bf8-9133-7c910a4382dc","Type":"ContainerStarted","Data":"372d277762900ee080a8b9ca96e6c12791c177af5006be3e14beed67ff5ce8f6"} Feb 17 14:00:01 crc kubenswrapper[4833]: I0217 14:00:01.109722 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb" event={"ID":"08511c57-c43c-4bf8-9133-7c910a4382dc","Type":"ContainerStarted","Data":"80c36779445cc017bc514c5dd19879d057bb797a073215ebd7a2249160422b0e"} Feb 17 14:00:01 crc kubenswrapper[4833]: I0217 14:00:01.123089 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb" podStartSLOduration=1.123069863 podStartE2EDuration="1.123069863s" podCreationTimestamp="2026-02-17 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:00:01.12226471 +0000 UTC m=+890.757364163" watchObservedRunningTime="2026-02-17 14:00:01.123069863 +0000 UTC m=+890.758169316" Feb 17 14:00:02 crc kubenswrapper[4833]: I0217 14:00:02.117724 4833 generic.go:334] "Generic (PLEG): container finished" podID="08511c57-c43c-4bf8-9133-7c910a4382dc" containerID="372d277762900ee080a8b9ca96e6c12791c177af5006be3e14beed67ff5ce8f6" exitCode=0 Feb 17 14:00:02 crc kubenswrapper[4833]: I0217 14:00:02.117782 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb" event={"ID":"08511c57-c43c-4bf8-9133-7c910a4382dc","Type":"ContainerDied","Data":"372d277762900ee080a8b9ca96e6c12791c177af5006be3e14beed67ff5ce8f6"} Feb 17 14:00:03 crc kubenswrapper[4833]: I0217 14:00:03.347670 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb" Feb 17 14:00:03 crc kubenswrapper[4833]: I0217 14:00:03.500747 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08511c57-c43c-4bf8-9133-7c910a4382dc-config-volume\") pod \"08511c57-c43c-4bf8-9133-7c910a4382dc\" (UID: \"08511c57-c43c-4bf8-9133-7c910a4382dc\") " Feb 17 14:00:03 crc kubenswrapper[4833]: I0217 14:00:03.500805 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8wxq\" (UniqueName: \"kubernetes.io/projected/08511c57-c43c-4bf8-9133-7c910a4382dc-kube-api-access-c8wxq\") pod \"08511c57-c43c-4bf8-9133-7c910a4382dc\" (UID: \"08511c57-c43c-4bf8-9133-7c910a4382dc\") " Feb 17 14:00:03 crc kubenswrapper[4833]: I0217 14:00:03.500852 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08511c57-c43c-4bf8-9133-7c910a4382dc-secret-volume\") pod \"08511c57-c43c-4bf8-9133-7c910a4382dc\" (UID: \"08511c57-c43c-4bf8-9133-7c910a4382dc\") " Feb 17 14:00:03 crc kubenswrapper[4833]: I0217 14:00:03.501701 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08511c57-c43c-4bf8-9133-7c910a4382dc-config-volume" (OuterVolumeSpecName: "config-volume") pod "08511c57-c43c-4bf8-9133-7c910a4382dc" (UID: "08511c57-c43c-4bf8-9133-7c910a4382dc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:00:03 crc kubenswrapper[4833]: I0217 14:00:03.506383 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08511c57-c43c-4bf8-9133-7c910a4382dc-kube-api-access-c8wxq" (OuterVolumeSpecName: "kube-api-access-c8wxq") pod "08511c57-c43c-4bf8-9133-7c910a4382dc" (UID: "08511c57-c43c-4bf8-9133-7c910a4382dc"). 
InnerVolumeSpecName "kube-api-access-c8wxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:00:03 crc kubenswrapper[4833]: I0217 14:00:03.506393 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08511c57-c43c-4bf8-9133-7c910a4382dc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "08511c57-c43c-4bf8-9133-7c910a4382dc" (UID: "08511c57-c43c-4bf8-9133-7c910a4382dc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:00:03 crc kubenswrapper[4833]: I0217 14:00:03.602188 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8wxq\" (UniqueName: \"kubernetes.io/projected/08511c57-c43c-4bf8-9133-7c910a4382dc-kube-api-access-c8wxq\") on node \"crc\" DevicePath \"\"" Feb 17 14:00:03 crc kubenswrapper[4833]: I0217 14:00:03.602224 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08511c57-c43c-4bf8-9133-7c910a4382dc-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 14:00:03 crc kubenswrapper[4833]: I0217 14:00:03.602234 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08511c57-c43c-4bf8-9133-7c910a4382dc-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 14:00:03 crc kubenswrapper[4833]: I0217 14:00:03.732992 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-54l4d" Feb 17 14:00:03 crc kubenswrapper[4833]: I0217 14:00:03.733049 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-54l4d" Feb 17 14:00:03 crc kubenswrapper[4833]: I0217 14:00:03.772863 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-54l4d" Feb 17 14:00:04 crc kubenswrapper[4833]: I0217 14:00:04.134650 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb" event={"ID":"08511c57-c43c-4bf8-9133-7c910a4382dc","Type":"ContainerDied","Data":"80c36779445cc017bc514c5dd19879d057bb797a073215ebd7a2249160422b0e"} Feb 17 14:00:04 crc kubenswrapper[4833]: I0217 14:00:04.134713 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80c36779445cc017bc514c5dd19879d057bb797a073215ebd7a2249160422b0e" Feb 17 14:00:04 crc kubenswrapper[4833]: I0217 14:00:04.134722 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-rctlb" Feb 17 14:00:04 crc kubenswrapper[4833]: I0217 14:00:04.178410 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-54l4d" Feb 17 14:00:05 crc kubenswrapper[4833]: I0217 14:00:05.847855 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq"] Feb 17 14:00:05 crc kubenswrapper[4833]: E0217 14:00:05.848713 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08511c57-c43c-4bf8-9133-7c910a4382dc" containerName="collect-profiles" Feb 17 14:00:05 crc kubenswrapper[4833]: I0217 14:00:05.848736 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="08511c57-c43c-4bf8-9133-7c910a4382dc" containerName="collect-profiles" Feb 17 14:00:05 crc kubenswrapper[4833]: I0217 14:00:05.850083 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="08511c57-c43c-4bf8-9133-7c910a4382dc" containerName="collect-profiles" Feb 17 14:00:05 crc kubenswrapper[4833]: I0217 14:00:05.854327 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq" Feb 17 14:00:05 crc kubenswrapper[4833]: I0217 14:00:05.857663 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-zhldv" Feb 17 14:00:05 crc kubenswrapper[4833]: I0217 14:00:05.864681 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq"] Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.042805 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fea10cca-4838-469c-9988-4f1c15e2d66d-bundle\") pod \"76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq\" (UID: \"fea10cca-4838-469c-9988-4f1c15e2d66d\") " pod="openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq" Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.042860 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8qt8\" (UniqueName: \"kubernetes.io/projected/fea10cca-4838-469c-9988-4f1c15e2d66d-kube-api-access-r8qt8\") pod \"76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq\" (UID: \"fea10cca-4838-469c-9988-4f1c15e2d66d\") " pod="openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq" Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.042899 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fea10cca-4838-469c-9988-4f1c15e2d66d-util\") pod \"76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq\" (UID: \"fea10cca-4838-469c-9988-4f1c15e2d66d\") " pod="openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq" Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 
14:00:06.143877 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fea10cca-4838-469c-9988-4f1c15e2d66d-bundle\") pod \"76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq\" (UID: \"fea10cca-4838-469c-9988-4f1c15e2d66d\") " pod="openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq" Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.143925 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qt8\" (UniqueName: \"kubernetes.io/projected/fea10cca-4838-469c-9988-4f1c15e2d66d-kube-api-access-r8qt8\") pod \"76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq\" (UID: \"fea10cca-4838-469c-9988-4f1c15e2d66d\") " pod="openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq" Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.143962 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fea10cca-4838-469c-9988-4f1c15e2d66d-util\") pod \"76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq\" (UID: \"fea10cca-4838-469c-9988-4f1c15e2d66d\") " pod="openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq" Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.144816 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fea10cca-4838-469c-9988-4f1c15e2d66d-bundle\") pod \"76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq\" (UID: \"fea10cca-4838-469c-9988-4f1c15e2d66d\") " pod="openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq" Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.144870 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/fea10cca-4838-469c-9988-4f1c15e2d66d-util\") pod \"76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq\" (UID: \"fea10cca-4838-469c-9988-4f1c15e2d66d\") " pod="openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq" Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.176261 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8qt8\" (UniqueName: \"kubernetes.io/projected/fea10cca-4838-469c-9988-4f1c15e2d66d-kube-api-access-r8qt8\") pod \"76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq\" (UID: \"fea10cca-4838-469c-9988-4f1c15e2d66d\") " pod="openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq" Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.178914 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq" Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.220561 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8dmth"] Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.220953 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8dmth" podUID="1f1ff29e-a95d-4cf6-9d33-5378e2b960c8" containerName="registry-server" containerID="cri-o://ad3c45dd7a4e38fccc643c3a4dd76bb2dc2e1401d9e5d278118d5df95e6df484" gracePeriod=2 Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.581423 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8dmth" Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.636729 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq"] Feb 17 14:00:06 crc kubenswrapper[4833]: W0217 14:00:06.639884 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfea10cca_4838_469c_9988_4f1c15e2d66d.slice/crio-ebdb180adfa8ae70eaae010ad8c3b167054cd00876be6feb85725c212118fcd0 WatchSource:0}: Error finding container ebdb180adfa8ae70eaae010ad8c3b167054cd00876be6feb85725c212118fcd0: Status 404 returned error can't find the container with id ebdb180adfa8ae70eaae010ad8c3b167054cd00876be6feb85725c212118fcd0 Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.654701 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grjl8\" (UniqueName: \"kubernetes.io/projected/1f1ff29e-a95d-4cf6-9d33-5378e2b960c8-kube-api-access-grjl8\") pod \"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8\" (UID: \"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8\") " Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.654799 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1ff29e-a95d-4cf6-9d33-5378e2b960c8-utilities\") pod \"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8\" (UID: \"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8\") " Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.654954 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1ff29e-a95d-4cf6-9d33-5378e2b960c8-catalog-content\") pod \"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8\" (UID: \"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8\") " Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.655621 4833 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f1ff29e-a95d-4cf6-9d33-5378e2b960c8-utilities" (OuterVolumeSpecName: "utilities") pod "1f1ff29e-a95d-4cf6-9d33-5378e2b960c8" (UID: "1f1ff29e-a95d-4cf6-9d33-5378e2b960c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.659728 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f1ff29e-a95d-4cf6-9d33-5378e2b960c8-kube-api-access-grjl8" (OuterVolumeSpecName: "kube-api-access-grjl8") pod "1f1ff29e-a95d-4cf6-9d33-5378e2b960c8" (UID: "1f1ff29e-a95d-4cf6-9d33-5378e2b960c8"). InnerVolumeSpecName "kube-api-access-grjl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.695287 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f1ff29e-a95d-4cf6-9d33-5378e2b960c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f1ff29e-a95d-4cf6-9d33-5378e2b960c8" (UID: "1f1ff29e-a95d-4cf6-9d33-5378e2b960c8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.756805 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1ff29e-a95d-4cf6-9d33-5378e2b960c8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.756838 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grjl8\" (UniqueName: \"kubernetes.io/projected/1f1ff29e-a95d-4cf6-9d33-5378e2b960c8-kube-api-access-grjl8\") on node \"crc\" DevicePath \"\"" Feb 17 14:00:06 crc kubenswrapper[4833]: I0217 14:00:06.756848 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1ff29e-a95d-4cf6-9d33-5378e2b960c8-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:00:07 crc kubenswrapper[4833]: I0217 14:00:07.157302 4833 generic.go:334] "Generic (PLEG): container finished" podID="fea10cca-4838-469c-9988-4f1c15e2d66d" containerID="123f664085dd73de02dc5a52fbaecaab36b379c67a4d85b20409623aa6d6f557" exitCode=0 Feb 17 14:00:07 crc kubenswrapper[4833]: I0217 14:00:07.157360 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq" event={"ID":"fea10cca-4838-469c-9988-4f1c15e2d66d","Type":"ContainerDied","Data":"123f664085dd73de02dc5a52fbaecaab36b379c67a4d85b20409623aa6d6f557"} Feb 17 14:00:07 crc kubenswrapper[4833]: I0217 14:00:07.157671 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq" event={"ID":"fea10cca-4838-469c-9988-4f1c15e2d66d","Type":"ContainerStarted","Data":"ebdb180adfa8ae70eaae010ad8c3b167054cd00876be6feb85725c212118fcd0"} Feb 17 14:00:07 crc kubenswrapper[4833]: I0217 14:00:07.160498 4833 generic.go:334] "Generic (PLEG): container finished" 
podID="1f1ff29e-a95d-4cf6-9d33-5378e2b960c8" containerID="ad3c45dd7a4e38fccc643c3a4dd76bb2dc2e1401d9e5d278118d5df95e6df484" exitCode=0 Feb 17 14:00:07 crc kubenswrapper[4833]: I0217 14:00:07.160542 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8dmth" event={"ID":"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8","Type":"ContainerDied","Data":"ad3c45dd7a4e38fccc643c3a4dd76bb2dc2e1401d9e5d278118d5df95e6df484"} Feb 17 14:00:07 crc kubenswrapper[4833]: I0217 14:00:07.160566 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8dmth" Feb 17 14:00:07 crc kubenswrapper[4833]: I0217 14:00:07.160582 4833 scope.go:117] "RemoveContainer" containerID="ad3c45dd7a4e38fccc643c3a4dd76bb2dc2e1401d9e5d278118d5df95e6df484" Feb 17 14:00:07 crc kubenswrapper[4833]: I0217 14:00:07.160570 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8dmth" event={"ID":"1f1ff29e-a95d-4cf6-9d33-5378e2b960c8","Type":"ContainerDied","Data":"d0e08179625ffb19a900d058416438f451a77653a2166578e54f807cb39df66c"} Feb 17 14:00:07 crc kubenswrapper[4833]: I0217 14:00:07.176341 4833 scope.go:117] "RemoveContainer" containerID="47383ae067a56ae65bab4d0a346570f90efcda57ff3f32f4a96907fddae7077c" Feb 17 14:00:07 crc kubenswrapper[4833]: I0217 14:00:07.188120 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8dmth"] Feb 17 14:00:07 crc kubenswrapper[4833]: I0217 14:00:07.200577 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8dmth"] Feb 17 14:00:07 crc kubenswrapper[4833]: I0217 14:00:07.208543 4833 scope.go:117] "RemoveContainer" containerID="1333efa37c5a43d26914fe63a5a9e0637e74178a996cedf03505a515f6211437" Feb 17 14:00:07 crc kubenswrapper[4833]: I0217 14:00:07.228023 4833 scope.go:117] "RemoveContainer" 
containerID="ad3c45dd7a4e38fccc643c3a4dd76bb2dc2e1401d9e5d278118d5df95e6df484" Feb 17 14:00:07 crc kubenswrapper[4833]: E0217 14:00:07.228465 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad3c45dd7a4e38fccc643c3a4dd76bb2dc2e1401d9e5d278118d5df95e6df484\": container with ID starting with ad3c45dd7a4e38fccc643c3a4dd76bb2dc2e1401d9e5d278118d5df95e6df484 not found: ID does not exist" containerID="ad3c45dd7a4e38fccc643c3a4dd76bb2dc2e1401d9e5d278118d5df95e6df484" Feb 17 14:00:07 crc kubenswrapper[4833]: I0217 14:00:07.228497 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad3c45dd7a4e38fccc643c3a4dd76bb2dc2e1401d9e5d278118d5df95e6df484"} err="failed to get container status \"ad3c45dd7a4e38fccc643c3a4dd76bb2dc2e1401d9e5d278118d5df95e6df484\": rpc error: code = NotFound desc = could not find container \"ad3c45dd7a4e38fccc643c3a4dd76bb2dc2e1401d9e5d278118d5df95e6df484\": container with ID starting with ad3c45dd7a4e38fccc643c3a4dd76bb2dc2e1401d9e5d278118d5df95e6df484 not found: ID does not exist" Feb 17 14:00:07 crc kubenswrapper[4833]: I0217 14:00:07.228520 4833 scope.go:117] "RemoveContainer" containerID="47383ae067a56ae65bab4d0a346570f90efcda57ff3f32f4a96907fddae7077c" Feb 17 14:00:07 crc kubenswrapper[4833]: E0217 14:00:07.228996 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47383ae067a56ae65bab4d0a346570f90efcda57ff3f32f4a96907fddae7077c\": container with ID starting with 47383ae067a56ae65bab4d0a346570f90efcda57ff3f32f4a96907fddae7077c not found: ID does not exist" containerID="47383ae067a56ae65bab4d0a346570f90efcda57ff3f32f4a96907fddae7077c" Feb 17 14:00:07 crc kubenswrapper[4833]: I0217 14:00:07.229022 4833 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"47383ae067a56ae65bab4d0a346570f90efcda57ff3f32f4a96907fddae7077c"} err="failed to get container status \"47383ae067a56ae65bab4d0a346570f90efcda57ff3f32f4a96907fddae7077c\": rpc error: code = NotFound desc = could not find container \"47383ae067a56ae65bab4d0a346570f90efcda57ff3f32f4a96907fddae7077c\": container with ID starting with 47383ae067a56ae65bab4d0a346570f90efcda57ff3f32f4a96907fddae7077c not found: ID does not exist" Feb 17 14:00:07 crc kubenswrapper[4833]: I0217 14:00:07.229084 4833 scope.go:117] "RemoveContainer" containerID="1333efa37c5a43d26914fe63a5a9e0637e74178a996cedf03505a515f6211437" Feb 17 14:00:07 crc kubenswrapper[4833]: E0217 14:00:07.229344 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1333efa37c5a43d26914fe63a5a9e0637e74178a996cedf03505a515f6211437\": container with ID starting with 1333efa37c5a43d26914fe63a5a9e0637e74178a996cedf03505a515f6211437 not found: ID does not exist" containerID="1333efa37c5a43d26914fe63a5a9e0637e74178a996cedf03505a515f6211437" Feb 17 14:00:07 crc kubenswrapper[4833]: I0217 14:00:07.229372 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1333efa37c5a43d26914fe63a5a9e0637e74178a996cedf03505a515f6211437"} err="failed to get container status \"1333efa37c5a43d26914fe63a5a9e0637e74178a996cedf03505a515f6211437\": rpc error: code = NotFound desc = could not find container \"1333efa37c5a43d26914fe63a5a9e0637e74178a996cedf03505a515f6211437\": container with ID starting with 1333efa37c5a43d26914fe63a5a9e0637e74178a996cedf03505a515f6211437 not found: ID does not exist" Feb 17 14:00:08 crc kubenswrapper[4833]: I0217 14:00:08.171713 4833 generic.go:334] "Generic (PLEG): container finished" podID="fea10cca-4838-469c-9988-4f1c15e2d66d" containerID="7d5e239e3ee4729efe34bccb9c26f5ce0ab70278ad72fdbf915826ff034abace" exitCode=0 Feb 17 14:00:08 crc kubenswrapper[4833]: 
I0217 14:00:08.171757 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq" event={"ID":"fea10cca-4838-469c-9988-4f1c15e2d66d","Type":"ContainerDied","Data":"7d5e239e3ee4729efe34bccb9c26f5ce0ab70278ad72fdbf915826ff034abace"} Feb 17 14:00:09 crc kubenswrapper[4833]: I0217 14:00:09.052920 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f1ff29e-a95d-4cf6-9d33-5378e2b960c8" path="/var/lib/kubelet/pods/1f1ff29e-a95d-4cf6-9d33-5378e2b960c8/volumes" Feb 17 14:00:09 crc kubenswrapper[4833]: I0217 14:00:09.194021 4833 generic.go:334] "Generic (PLEG): container finished" podID="fea10cca-4838-469c-9988-4f1c15e2d66d" containerID="c4114ccd256e5d9452937f7828ecbc6e5fd9fbfd1d2ddc52390996231dd3fb22" exitCode=0 Feb 17 14:00:09 crc kubenswrapper[4833]: I0217 14:00:09.194823 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq" event={"ID":"fea10cca-4838-469c-9988-4f1c15e2d66d","Type":"ContainerDied","Data":"c4114ccd256e5d9452937f7828ecbc6e5fd9fbfd1d2ddc52390996231dd3fb22"} Feb 17 14:00:10 crc kubenswrapper[4833]: I0217 14:00:10.536430 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq" Feb 17 14:00:10 crc kubenswrapper[4833]: I0217 14:00:10.723730 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fea10cca-4838-469c-9988-4f1c15e2d66d-util\") pod \"fea10cca-4838-469c-9988-4f1c15e2d66d\" (UID: \"fea10cca-4838-469c-9988-4f1c15e2d66d\") " Feb 17 14:00:10 crc kubenswrapper[4833]: I0217 14:00:10.723940 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fea10cca-4838-469c-9988-4f1c15e2d66d-bundle\") pod \"fea10cca-4838-469c-9988-4f1c15e2d66d\" (UID: \"fea10cca-4838-469c-9988-4f1c15e2d66d\") " Feb 17 14:00:10 crc kubenswrapper[4833]: I0217 14:00:10.724001 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8qt8\" (UniqueName: \"kubernetes.io/projected/fea10cca-4838-469c-9988-4f1c15e2d66d-kube-api-access-r8qt8\") pod \"fea10cca-4838-469c-9988-4f1c15e2d66d\" (UID: \"fea10cca-4838-469c-9988-4f1c15e2d66d\") " Feb 17 14:00:10 crc kubenswrapper[4833]: I0217 14:00:10.724726 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fea10cca-4838-469c-9988-4f1c15e2d66d-bundle" (OuterVolumeSpecName: "bundle") pod "fea10cca-4838-469c-9988-4f1c15e2d66d" (UID: "fea10cca-4838-469c-9988-4f1c15e2d66d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:00:10 crc kubenswrapper[4833]: I0217 14:00:10.730559 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fea10cca-4838-469c-9988-4f1c15e2d66d-kube-api-access-r8qt8" (OuterVolumeSpecName: "kube-api-access-r8qt8") pod "fea10cca-4838-469c-9988-4f1c15e2d66d" (UID: "fea10cca-4838-469c-9988-4f1c15e2d66d"). InnerVolumeSpecName "kube-api-access-r8qt8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:00:10 crc kubenswrapper[4833]: I0217 14:00:10.746960 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fea10cca-4838-469c-9988-4f1c15e2d66d-util" (OuterVolumeSpecName: "util") pod "fea10cca-4838-469c-9988-4f1c15e2d66d" (UID: "fea10cca-4838-469c-9988-4f1c15e2d66d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:00:10 crc kubenswrapper[4833]: I0217 14:00:10.824983 4833 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fea10cca-4838-469c-9988-4f1c15e2d66d-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:00:10 crc kubenswrapper[4833]: I0217 14:00:10.825013 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8qt8\" (UniqueName: \"kubernetes.io/projected/fea10cca-4838-469c-9988-4f1c15e2d66d-kube-api-access-r8qt8\") on node \"crc\" DevicePath \"\"" Feb 17 14:00:10 crc kubenswrapper[4833]: I0217 14:00:10.825024 4833 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fea10cca-4838-469c-9988-4f1c15e2d66d-util\") on node \"crc\" DevicePath \"\"" Feb 17 14:00:11 crc kubenswrapper[4833]: I0217 14:00:11.209187 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq" event={"ID":"fea10cca-4838-469c-9988-4f1c15e2d66d","Type":"ContainerDied","Data":"ebdb180adfa8ae70eaae010ad8c3b167054cd00876be6feb85725c212118fcd0"} Feb 17 14:00:11 crc kubenswrapper[4833]: I0217 14:00:11.209490 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebdb180adfa8ae70eaae010ad8c3b167054cd00876be6feb85725c212118fcd0" Feb 17 14:00:11 crc kubenswrapper[4833]: I0217 14:00:11.209564 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq" Feb 17 14:00:14 crc kubenswrapper[4833]: I0217 14:00:14.244109 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:00:14 crc kubenswrapper[4833]: I0217 14:00:14.244407 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:00:17 crc kubenswrapper[4833]: I0217 14:00:17.818824 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt"] Feb 17 14:00:17 crc kubenswrapper[4833]: E0217 14:00:17.819384 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fea10cca-4838-469c-9988-4f1c15e2d66d" containerName="util" Feb 17 14:00:17 crc kubenswrapper[4833]: I0217 14:00:17.819398 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fea10cca-4838-469c-9988-4f1c15e2d66d" containerName="util" Feb 17 14:00:17 crc kubenswrapper[4833]: E0217 14:00:17.819423 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fea10cca-4838-469c-9988-4f1c15e2d66d" containerName="extract" Feb 17 14:00:17 crc kubenswrapper[4833]: I0217 14:00:17.819431 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fea10cca-4838-469c-9988-4f1c15e2d66d" containerName="extract" Feb 17 14:00:17 crc kubenswrapper[4833]: E0217 14:00:17.819447 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1ff29e-a95d-4cf6-9d33-5378e2b960c8" 
containerName="extract-utilities" Feb 17 14:00:17 crc kubenswrapper[4833]: I0217 14:00:17.819456 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1ff29e-a95d-4cf6-9d33-5378e2b960c8" containerName="extract-utilities" Feb 17 14:00:17 crc kubenswrapper[4833]: E0217 14:00:17.819468 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1ff29e-a95d-4cf6-9d33-5378e2b960c8" containerName="registry-server" Feb 17 14:00:17 crc kubenswrapper[4833]: I0217 14:00:17.819478 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1ff29e-a95d-4cf6-9d33-5378e2b960c8" containerName="registry-server" Feb 17 14:00:17 crc kubenswrapper[4833]: E0217 14:00:17.819493 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fea10cca-4838-469c-9988-4f1c15e2d66d" containerName="pull" Feb 17 14:00:17 crc kubenswrapper[4833]: I0217 14:00:17.819501 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fea10cca-4838-469c-9988-4f1c15e2d66d" containerName="pull" Feb 17 14:00:17 crc kubenswrapper[4833]: E0217 14:00:17.819520 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1ff29e-a95d-4cf6-9d33-5378e2b960c8" containerName="extract-content" Feb 17 14:00:17 crc kubenswrapper[4833]: I0217 14:00:17.819529 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1ff29e-a95d-4cf6-9d33-5378e2b960c8" containerName="extract-content" Feb 17 14:00:17 crc kubenswrapper[4833]: I0217 14:00:17.819671 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="fea10cca-4838-469c-9988-4f1c15e2d66d" containerName="extract" Feb 17 14:00:17 crc kubenswrapper[4833]: I0217 14:00:17.819682 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f1ff29e-a95d-4cf6-9d33-5378e2b960c8" containerName="registry-server" Feb 17 14:00:17 crc kubenswrapper[4833]: I0217 14:00:17.820173 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt" Feb 17 14:00:17 crc kubenswrapper[4833]: I0217 14:00:17.822112 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-spg9f" Feb 17 14:00:17 crc kubenswrapper[4833]: I0217 14:00:17.851199 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt"] Feb 17 14:00:18 crc kubenswrapper[4833]: I0217 14:00:18.020784 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxwdp\" (UniqueName: \"kubernetes.io/projected/a63e4e29-66ee-444a-9c42-f91149922c13-kube-api-access-lxwdp\") pod \"openstack-operator-controller-init-d585dc784-s6xrt\" (UID: \"a63e4e29-66ee-444a-9c42-f91149922c13\") " pod="openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt" Feb 17 14:00:18 crc kubenswrapper[4833]: I0217 14:00:18.122337 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxwdp\" (UniqueName: \"kubernetes.io/projected/a63e4e29-66ee-444a-9c42-f91149922c13-kube-api-access-lxwdp\") pod \"openstack-operator-controller-init-d585dc784-s6xrt\" (UID: \"a63e4e29-66ee-444a-9c42-f91149922c13\") " pod="openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt" Feb 17 14:00:18 crc kubenswrapper[4833]: I0217 14:00:18.143865 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxwdp\" (UniqueName: \"kubernetes.io/projected/a63e4e29-66ee-444a-9c42-f91149922c13-kube-api-access-lxwdp\") pod \"openstack-operator-controller-init-d585dc784-s6xrt\" (UID: \"a63e4e29-66ee-444a-9c42-f91149922c13\") " pod="openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt" Feb 17 14:00:18 crc kubenswrapper[4833]: I0217 14:00:18.441121 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt" Feb 17 14:00:18 crc kubenswrapper[4833]: I0217 14:00:18.862383 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt"] Feb 17 14:00:18 crc kubenswrapper[4833]: W0217 14:00:18.871468 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda63e4e29_66ee_444a_9c42_f91149922c13.slice/crio-f172822b58b77a02a09f60698f199837113d698da2483ba08a4aea2f8ecdce2b WatchSource:0}: Error finding container f172822b58b77a02a09f60698f199837113d698da2483ba08a4aea2f8ecdce2b: Status 404 returned error can't find the container with id f172822b58b77a02a09f60698f199837113d698da2483ba08a4aea2f8ecdce2b Feb 17 14:00:19 crc kubenswrapper[4833]: I0217 14:00:19.276584 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt" event={"ID":"a63e4e29-66ee-444a-9c42-f91149922c13","Type":"ContainerStarted","Data":"f172822b58b77a02a09f60698f199837113d698da2483ba08a4aea2f8ecdce2b"} Feb 17 14:00:23 crc kubenswrapper[4833]: I0217 14:00:23.302240 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt" event={"ID":"a63e4e29-66ee-444a-9c42-f91149922c13","Type":"ContainerStarted","Data":"85b15e27b00ac0c50a7adcc677e346be4e2c45c57e625525f00c438013c8a94a"} Feb 17 14:00:23 crc kubenswrapper[4833]: I0217 14:00:23.302881 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt" Feb 17 14:00:23 crc kubenswrapper[4833]: I0217 14:00:23.328130 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt" podStartSLOduration=2.517671027 
podStartE2EDuration="6.328116611s" podCreationTimestamp="2026-02-17 14:00:17 +0000 UTC" firstStartedPulling="2026-02-17 14:00:18.873951465 +0000 UTC m=+908.509050898" lastFinishedPulling="2026-02-17 14:00:22.684397049 +0000 UTC m=+912.319496482" observedRunningTime="2026-02-17 14:00:23.325772425 +0000 UTC m=+912.960871878" watchObservedRunningTime="2026-02-17 14:00:23.328116611 +0000 UTC m=+912.963216044" Feb 17 14:00:28 crc kubenswrapper[4833]: I0217 14:00:28.445581 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt" Feb 17 14:00:44 crc kubenswrapper[4833]: I0217 14:00:44.243402 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:00:44 crc kubenswrapper[4833]: I0217 14:00:44.244285 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:00:44 crc kubenswrapper[4833]: I0217 14:00:44.244345 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" Feb 17 14:00:44 crc kubenswrapper[4833]: I0217 14:00:44.245147 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a5e1a9a5d589559159e0a6c8e522d3462758f7e97a786e9713efff02185b2a7"} pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Feb 17 14:00:44 crc kubenswrapper[4833]: I0217 14:00:44.245211 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" containerID="cri-o://1a5e1a9a5d589559159e0a6c8e522d3462758f7e97a786e9713efff02185b2a7" gracePeriod=600 Feb 17 14:00:44 crc kubenswrapper[4833]: I0217 14:00:44.450151 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerID="1a5e1a9a5d589559159e0a6c8e522d3462758f7e97a786e9713efff02185b2a7" exitCode=0 Feb 17 14:00:44 crc kubenswrapper[4833]: I0217 14:00:44.450201 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" event={"ID":"f4a1ca83-1919-4f9c-82de-c849cbd50e70","Type":"ContainerDied","Data":"1a5e1a9a5d589559159e0a6c8e522d3462758f7e97a786e9713efff02185b2a7"} Feb 17 14:00:44 crc kubenswrapper[4833]: I0217 14:00:44.450244 4833 scope.go:117] "RemoveContainer" containerID="2987cca443b50c5381fbff8d4447cf8801984c777d16362c800622301c56c146" Feb 17 14:00:45 crc kubenswrapper[4833]: I0217 14:00:45.458859 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" event={"ID":"f4a1ca83-1919-4f9c-82de-c849cbd50e70","Type":"ContainerStarted","Data":"1d59ec5097f9c4d0a402367427ee7a192fa778075aee8b4748e812d794a46746"} Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.532865 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-trcbs"] Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.534144 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-trcbs" Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.536481 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-88tkz" Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.543395 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-jjchm"] Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.544427 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-jjchm" Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.547031 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-z72gk" Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.548227 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-trcbs"] Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.558314 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-hdxb7"] Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.559154 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-hdxb7" Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.562456 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-dqs84" Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.564550 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-jjchm"] Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.573654 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-hdxb7"] Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.609333 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-zqqxg"] Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.610104 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zqqxg" Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.615529 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-n26nv" Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.631170 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-798bv"] Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.632212 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-9595d6797-798bv" Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.638486 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-2l9kk" Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.650387 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-zqqxg"] Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.694636 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8"] Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.695640 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8" Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.700088 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.700914 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7n9m\" (UniqueName: \"kubernetes.io/projected/6504a5be-090f-4500-adaa-9ace23ba69f3-kube-api-access-h7n9m\") pod \"barbican-operator-controller-manager-c4b7d6946-trcbs\" (UID: \"6504a5be-090f-4500-adaa-9ace23ba69f3\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-trcbs" Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.700967 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkllj\" (UniqueName: \"kubernetes.io/projected/df534d1f-b343-4090-b8ad-114864ea82ec-kube-api-access-pkllj\") pod \"cinder-operator-controller-manager-57746b5ff9-jjchm\" (UID: \"df534d1f-b343-4090-b8ad-114864ea82ec\") " 
pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-jjchm" Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.701000 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnq7d\" (UniqueName: \"kubernetes.io/projected/21fdadaa-0128-4daa-9dd8-2d4f7c600c99-kube-api-access-qnq7d\") pod \"designate-operator-controller-manager-55cc45767f-hdxb7\" (UID: \"21fdadaa-0128-4daa-9dd8-2d4f7c600c99\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-hdxb7" Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.703120 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-p4xtq" Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.707521 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-kkrrz"] Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.708349 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-kkrrz"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.710454 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-lqh7s"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.755985 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8"]
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.775011 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-798bv"]
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.804991 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgkq5\" (UniqueName: \"kubernetes.io/projected/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-kube-api-access-sgkq5\") pod \"infra-operator-controller-manager-66d6b5f488-87mt8\" (UID: \"ed6e734c-3eb1-47d0-ad82-aa5c934da55a\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.805068 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert\") pod \"infra-operator-controller-manager-66d6b5f488-87mt8\" (UID: \"ed6e734c-3eb1-47d0-ad82-aa5c934da55a\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.805114 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkltg\" (UniqueName: \"kubernetes.io/projected/d98ee2cd-c429-474d-a6a8-a7f9bc1ed559-kube-api-access-hkltg\") pod \"heat-operator-controller-manager-9595d6797-798bv\" (UID: \"d98ee2cd-c429-474d-a6a8-a7f9bc1ed559\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-798bv"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.805168 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7n9m\" (UniqueName: \"kubernetes.io/projected/6504a5be-090f-4500-adaa-9ace23ba69f3-kube-api-access-h7n9m\") pod \"barbican-operator-controller-manager-c4b7d6946-trcbs\" (UID: \"6504a5be-090f-4500-adaa-9ace23ba69f3\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-trcbs"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.805251 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkllj\" (UniqueName: \"kubernetes.io/projected/df534d1f-b343-4090-b8ad-114864ea82ec-kube-api-access-pkllj\") pod \"cinder-operator-controller-manager-57746b5ff9-jjchm\" (UID: \"df534d1f-b343-4090-b8ad-114864ea82ec\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-jjchm"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.805288 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnq7d\" (UniqueName: \"kubernetes.io/projected/21fdadaa-0128-4daa-9dd8-2d4f7c600c99-kube-api-access-qnq7d\") pod \"designate-operator-controller-manager-55cc45767f-hdxb7\" (UID: \"21fdadaa-0128-4daa-9dd8-2d4f7c600c99\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-hdxb7"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.805351 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47q9j\" (UniqueName: \"kubernetes.io/projected/35e45f6b-d546-4818-94a1-c8490e83a786-kube-api-access-47q9j\") pod \"glance-operator-controller-manager-68c6d499cb-zqqxg\" (UID: \"35e45f6b-d546-4818-94a1-c8490e83a786\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zqqxg"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.831348 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-5lrx9"]
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.832571 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-5lrx9"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.837409 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnq7d\" (UniqueName: \"kubernetes.io/projected/21fdadaa-0128-4daa-9dd8-2d4f7c600c99-kube-api-access-qnq7d\") pod \"designate-operator-controller-manager-55cc45767f-hdxb7\" (UID: \"21fdadaa-0128-4daa-9dd8-2d4f7c600c99\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-hdxb7"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.858268 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-7mljs"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.862106 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkllj\" (UniqueName: \"kubernetes.io/projected/df534d1f-b343-4090-b8ad-114864ea82ec-kube-api-access-pkllj\") pod \"cinder-operator-controller-manager-57746b5ff9-jjchm\" (UID: \"df534d1f-b343-4090-b8ad-114864ea82ec\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-jjchm"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.872152 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7n9m\" (UniqueName: \"kubernetes.io/projected/6504a5be-090f-4500-adaa-9ace23ba69f3-kube-api-access-h7n9m\") pod \"barbican-operator-controller-manager-c4b7d6946-trcbs\" (UID: \"6504a5be-090f-4500-adaa-9ace23ba69f3\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-trcbs"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.885448 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-kkrrz"]
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.887002 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-jjchm"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.893583 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-hdxb7"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.902952 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-5lrx9"]
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.907779 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47q9j\" (UniqueName: \"kubernetes.io/projected/35e45f6b-d546-4818-94a1-c8490e83a786-kube-api-access-47q9j\") pod \"glance-operator-controller-manager-68c6d499cb-zqqxg\" (UID: \"35e45f6b-d546-4818-94a1-c8490e83a786\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zqqxg"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.907844 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnvgn\" (UniqueName: \"kubernetes.io/projected/2acb502c-ae3f-4b68-b2c8-aaebcddc54d8-kube-api-access-dnvgn\") pod \"horizon-operator-controller-manager-54fb488b88-kkrrz\" (UID: \"2acb502c-ae3f-4b68-b2c8-aaebcddc54d8\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-kkrrz"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.907867 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgkq5\" (UniqueName: \"kubernetes.io/projected/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-kube-api-access-sgkq5\") pod \"infra-operator-controller-manager-66d6b5f488-87mt8\" (UID: \"ed6e734c-3eb1-47d0-ad82-aa5c934da55a\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.907885 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert\") pod \"infra-operator-controller-manager-66d6b5f488-87mt8\" (UID: \"ed6e734c-3eb1-47d0-ad82-aa5c934da55a\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.907904 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkltg\" (UniqueName: \"kubernetes.io/projected/d98ee2cd-c429-474d-a6a8-a7f9bc1ed559-kube-api-access-hkltg\") pod \"heat-operator-controller-manager-9595d6797-798bv\" (UID: \"d98ee2cd-c429-474d-a6a8-a7f9bc1ed559\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-798bv"
Feb 17 14:00:51 crc kubenswrapper[4833]: E0217 14:00:51.908351 4833 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 17 14:00:51 crc kubenswrapper[4833]: E0217 14:00:51.908393 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert podName:ed6e734c-3eb1-47d0-ad82-aa5c934da55a nodeName:}" failed. No retries permitted until 2026-02-17 14:00:52.408378394 +0000 UTC m=+942.043477827 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert") pod "infra-operator-controller-manager-66d6b5f488-87mt8" (UID: "ed6e734c-3eb1-47d0-ad82-aa5c934da55a") : secret "infra-operator-webhook-server-cert" not found
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.914306 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-pc29h"]
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.915263 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-pc29h"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.928085 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-rq2hb"]
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.929831 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-rq2hb"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.929887 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-wmvwq"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.930884 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-pc29h"]
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.935227 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-px7q4"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.936484 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47q9j\" (UniqueName: \"kubernetes.io/projected/35e45f6b-d546-4818-94a1-c8490e83a786-kube-api-access-47q9j\") pod \"glance-operator-controller-manager-68c6d499cb-zqqxg\" (UID: \"35e45f6b-d546-4818-94a1-c8490e83a786\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zqqxg"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.940473 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zqqxg"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.946387 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgkq5\" (UniqueName: \"kubernetes.io/projected/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-kube-api-access-sgkq5\") pod \"infra-operator-controller-manager-66d6b5f488-87mt8\" (UID: \"ed6e734c-3eb1-47d0-ad82-aa5c934da55a\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.954626 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkltg\" (UniqueName: \"kubernetes.io/projected/d98ee2cd-c429-474d-a6a8-a7f9bc1ed559-kube-api-access-hkltg\") pod \"heat-operator-controller-manager-9595d6797-798bv\" (UID: \"d98ee2cd-c429-474d-a6a8-a7f9bc1ed559\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-798bv"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.961093 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-rq2hb"]
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.965961 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-9595d6797-798bv"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.987100 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-brrjj"]
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.987947 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-brrjj"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.990173 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-bffkj"]
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.991099 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-bffkj"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.991662 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-cxd2c"
Feb 17 14:00:51 crc kubenswrapper[4833]: I0217 14:00:51.994420 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-l7t6w"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.003184 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-kst2n"]
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.008988 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnvgn\" (UniqueName: \"kubernetes.io/projected/2acb502c-ae3f-4b68-b2c8-aaebcddc54d8-kube-api-access-dnvgn\") pod \"horizon-operator-controller-manager-54fb488b88-kkrrz\" (UID: \"2acb502c-ae3f-4b68-b2c8-aaebcddc54d8\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-kkrrz"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.009677 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8stcx\" (UniqueName: \"kubernetes.io/projected/19d72e8b-fb46-4e69-bcbf-90eae727d08f-kube-api-access-8stcx\") pod \"ironic-operator-controller-manager-6494cdbf8f-5lrx9\" (UID: \"19d72e8b-fb46-4e69-bcbf-90eae727d08f\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-5lrx9"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.028318 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-kst2n"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.032430 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-n4ph5"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.040050 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnvgn\" (UniqueName: \"kubernetes.io/projected/2acb502c-ae3f-4b68-b2c8-aaebcddc54d8-kube-api-access-dnvgn\") pod \"horizon-operator-controller-manager-54fb488b88-kkrrz\" (UID: \"2acb502c-ae3f-4b68-b2c8-aaebcddc54d8\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-kkrrz"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.040300 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-twx52"]
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.043600 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-bffkj"]
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.043843 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-twx52"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.045787 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-w57g9"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.053711 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-kst2n"]
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.055179 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-kkrrz"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.060485 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-twx52"]
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.082897 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-brrjj"]
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.089618 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7"]
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.090417 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.092767 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-h5bd9"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.092947 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.095173 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-s4gk9"]
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.096110 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s4gk9"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.099580 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-q5zh7"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.101350 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-dt8tb"]
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.102449 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-dt8tb"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.104866 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-vs7qf"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.111813 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l8m5\" (UniqueName: \"kubernetes.io/projected/3c850072-fbc2-4f6a-a59c-51498bc5aef2-kube-api-access-2l8m5\") pod \"keystone-operator-controller-manager-6c78d668d5-pc29h\" (UID: \"3c850072-fbc2-4f6a-a59c-51498bc5aef2\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-pc29h"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.111878 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8stcx\" (UniqueName: \"kubernetes.io/projected/19d72e8b-fb46-4e69-bcbf-90eae727d08f-kube-api-access-8stcx\") pod \"ironic-operator-controller-manager-6494cdbf8f-5lrx9\" (UID: \"19d72e8b-fb46-4e69-bcbf-90eae727d08f\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-5lrx9"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.111915 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrs9n\" (UniqueName: \"kubernetes.io/projected/92da7367-c0eb-4ad3-a2d9-aebe2540cb11-kube-api-access-mrs9n\") pod \"mariadb-operator-controller-manager-66997756f6-brrjj\" (UID: \"92da7367-c0eb-4ad3-a2d9-aebe2540cb11\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-brrjj"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.111986 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9gqz\" (UniqueName: \"kubernetes.io/projected/6768cf80-4466-4e34-b1f5-fd7b4728b6ff-kube-api-access-h9gqz\") pod \"neutron-operator-controller-manager-54967dbbdf-bffkj\" (UID: \"6768cf80-4466-4e34-b1f5-fd7b4728b6ff\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-bffkj"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.112010 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb9zm\" (UniqueName: \"kubernetes.io/projected/056f0c10-839f-4bdf-9b0e-2c6eabb209a8-kube-api-access-jb9zm\") pod \"manila-operator-controller-manager-96fff9cb8-rq2hb\" (UID: \"056f0c10-839f-4bdf-9b0e-2c6eabb209a8\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-rq2hb"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.144024 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7"]
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.161664 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8stcx\" (UniqueName: \"kubernetes.io/projected/19d72e8b-fb46-4e69-bcbf-90eae727d08f-kube-api-access-8stcx\") pod \"ironic-operator-controller-manager-6494cdbf8f-5lrx9\" (UID: \"19d72e8b-fb46-4e69-bcbf-90eae727d08f\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-5lrx9"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.161905 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-trcbs"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.172226 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-s4gk9"]
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.205380 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-5lrx9"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.210426 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-7pspc"]
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.211942 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-7pspc"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.214751 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-mvh8n"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.217796 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhqj4\" (UniqueName: \"kubernetes.io/projected/22eaba8f-f822-42a0-a01f-4b1bdd5c0728-kube-api-access-fhqj4\") pod \"nova-operator-controller-manager-5ddd85db87-kst2n\" (UID: \"22eaba8f-f822-42a0-a01f-4b1bdd5c0728\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-kst2n"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.217885 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h6hk\" (UniqueName: \"kubernetes.io/projected/773b4953-21e2-42e2-b438-29b340f453ad-kube-api-access-2h6hk\") pod \"placement-operator-controller-manager-57bd55f9b7-dt8tb\" (UID: \"773b4953-21e2-42e2-b438-29b340f453ad\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-dt8tb"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.218236 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7\" (UID: \"032e7e1b-b6e8-4a69-905a-cd01e516f155\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.218285 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqp9q\" (UniqueName: \"kubernetes.io/projected/032e7e1b-b6e8-4a69-905a-cd01e516f155-kube-api-access-mqp9q\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7\" (UID: \"032e7e1b-b6e8-4a69-905a-cd01e516f155\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.218346 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9gqz\" (UniqueName: \"kubernetes.io/projected/6768cf80-4466-4e34-b1f5-fd7b4728b6ff-kube-api-access-h9gqz\") pod \"neutron-operator-controller-manager-54967dbbdf-bffkj\" (UID: \"6768cf80-4466-4e34-b1f5-fd7b4728b6ff\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-bffkj"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.218389 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb9zm\" (UniqueName: \"kubernetes.io/projected/056f0c10-839f-4bdf-9b0e-2c6eabb209a8-kube-api-access-jb9zm\") pod \"manila-operator-controller-manager-96fff9cb8-rq2hb\" (UID: \"056f0c10-839f-4bdf-9b0e-2c6eabb209a8\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-rq2hb"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.218457 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltrpc\" (UniqueName: \"kubernetes.io/projected/c35ceddd-0211-4828-a16e-4ff8d9d9dcd1-kube-api-access-ltrpc\") pod \"ovn-operator-controller-manager-85c99d655-s4gk9\" (UID: \"c35ceddd-0211-4828-a16e-4ff8d9d9dcd1\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s4gk9"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.218492 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l8m5\" (UniqueName: \"kubernetes.io/projected/3c850072-fbc2-4f6a-a59c-51498bc5aef2-kube-api-access-2l8m5\") pod \"keystone-operator-controller-manager-6c78d668d5-pc29h\" (UID: \"3c850072-fbc2-4f6a-a59c-51498bc5aef2\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-pc29h"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.218557 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrs9n\" (UniqueName: \"kubernetes.io/projected/92da7367-c0eb-4ad3-a2d9-aebe2540cb11-kube-api-access-mrs9n\") pod \"mariadb-operator-controller-manager-66997756f6-brrjj\" (UID: \"92da7367-c0eb-4ad3-a2d9-aebe2540cb11\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-brrjj"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.218627 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8kg2\" (UniqueName: \"kubernetes.io/projected/9985098c-6fe1-4a48-8aaf-6ef63f487c41-kube-api-access-d8kg2\") pod \"octavia-operator-controller-manager-745bbbd77b-twx52\" (UID: \"9985098c-6fe1-4a48-8aaf-6ef63f487c41\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-twx52"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.221804 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-dt8tb"]
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.237974 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-7pspc"]
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.249823 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb9zm\" (UniqueName: \"kubernetes.io/projected/056f0c10-839f-4bdf-9b0e-2c6eabb209a8-kube-api-access-jb9zm\") pod \"manila-operator-controller-manager-96fff9cb8-rq2hb\" (UID: \"056f0c10-839f-4bdf-9b0e-2c6eabb209a8\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-rq2hb"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.261589 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-6sx42"]
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.264354 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrs9n\" (UniqueName: \"kubernetes.io/projected/92da7367-c0eb-4ad3-a2d9-aebe2540cb11-kube-api-access-mrs9n\") pod \"mariadb-operator-controller-manager-66997756f6-brrjj\" (UID: \"92da7367-c0eb-4ad3-a2d9-aebe2540cb11\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-brrjj"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.264654 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9gqz\" (UniqueName: \"kubernetes.io/projected/6768cf80-4466-4e34-b1f5-fd7b4728b6ff-kube-api-access-h9gqz\") pod \"neutron-operator-controller-manager-54967dbbdf-bffkj\" (UID: \"6768cf80-4466-4e34-b1f5-fd7b4728b6ff\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-bffkj"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.265349 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-6sx42"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.272863 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-6sx42"]
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.278439 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-wzxhx"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.295755 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l8m5\" (UniqueName: \"kubernetes.io/projected/3c850072-fbc2-4f6a-a59c-51498bc5aef2-kube-api-access-2l8m5\") pod \"keystone-operator-controller-manager-6c78d668d5-pc29h\" (UID: \"3c850072-fbc2-4f6a-a59c-51498bc5aef2\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-pc29h"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.320762 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhqj4\" (UniqueName: \"kubernetes.io/projected/22eaba8f-f822-42a0-a01f-4b1bdd5c0728-kube-api-access-fhqj4\") pod \"nova-operator-controller-manager-5ddd85db87-kst2n\" (UID: \"22eaba8f-f822-42a0-a01f-4b1bdd5c0728\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-kst2n"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.320820 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h6hk\" (UniqueName: \"kubernetes.io/projected/773b4953-21e2-42e2-b438-29b340f453ad-kube-api-access-2h6hk\") pod \"placement-operator-controller-manager-57bd55f9b7-dt8tb\" (UID: \"773b4953-21e2-42e2-b438-29b340f453ad\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-dt8tb"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.320898 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7\" (UID: \"032e7e1b-b6e8-4a69-905a-cd01e516f155\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.320927 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqp9q\" (UniqueName: \"kubernetes.io/projected/032e7e1b-b6e8-4a69-905a-cd01e516f155-kube-api-access-mqp9q\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7\" (UID: \"032e7e1b-b6e8-4a69-905a-cd01e516f155\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.321000 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltrpc\" (UniqueName: \"kubernetes.io/projected/c35ceddd-0211-4828-a16e-4ff8d9d9dcd1-kube-api-access-ltrpc\") pod \"ovn-operator-controller-manager-85c99d655-s4gk9\" (UID: \"c35ceddd-0211-4828-a16e-4ff8d9d9dcd1\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s4gk9"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.321086 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blz59\" (UniqueName: \"kubernetes.io/projected/c55ae282-60fe-4b05-9bee-5539b9026800-kube-api-access-blz59\") pod \"swift-operator-controller-manager-79558bbfbf-7pspc\" (UID: \"c55ae282-60fe-4b05-9bee-5539b9026800\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-7pspc"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.321120 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8kg2\" (UniqueName: \"kubernetes.io/projected/9985098c-6fe1-4a48-8aaf-6ef63f487c41-kube-api-access-d8kg2\") pod \"octavia-operator-controller-manager-745bbbd77b-twx52\" (UID: \"9985098c-6fe1-4a48-8aaf-6ef63f487c41\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-twx52"
Feb 17 14:00:52 crc kubenswrapper[4833]: E0217 14:00:52.321855 4833 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 14:00:52 crc kubenswrapper[4833]: E0217 14:00:52.321910 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert podName:032e7e1b-b6e8-4a69-905a-cd01e516f155 nodeName:}" failed. No retries permitted until 2026-02-17 14:00:52.821889219 +0000 UTC m=+942.456988652 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7" (UID: "032e7e1b-b6e8-4a69-905a-cd01e516f155") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.341128 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-n4rt9"]
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.342192 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-n4rt9"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.349742 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-xw6pd"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.357786 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhqj4\" (UniqueName: \"kubernetes.io/projected/22eaba8f-f822-42a0-a01f-4b1bdd5c0728-kube-api-access-fhqj4\") pod \"nova-operator-controller-manager-5ddd85db87-kst2n\" (UID: \"22eaba8f-f822-42a0-a01f-4b1bdd5c0728\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-kst2n"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.359549 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-pc29h"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.387382 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8kg2\" (UniqueName: \"kubernetes.io/projected/9985098c-6fe1-4a48-8aaf-6ef63f487c41-kube-api-access-d8kg2\") pod \"octavia-operator-controller-manager-745bbbd77b-twx52\" (UID: \"9985098c-6fe1-4a48-8aaf-6ef63f487c41\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-twx52"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.394695 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-n4rt9"]
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.431954 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert\") pod \"infra-operator-controller-manager-66d6b5f488-87mt8\" (UID: \"ed6e734c-3eb1-47d0-ad82-aa5c934da55a\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.432028 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lgx5\" (UniqueName: \"kubernetes.io/projected/46d10b37-a41e-4063-8a7c-d1f8bf36847f-kube-api-access-8lgx5\") pod \"telemetry-operator-controller-manager-56dc67d744-6sx42\" (UID: \"46d10b37-a41e-4063-8a7c-d1f8bf36847f\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-6sx42"
Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.432075 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blz59\" (UniqueName: \"kubernetes.io/projected/c55ae282-60fe-4b05-9bee-5539b9026800-kube-api-access-blz59\") pod \"swift-operator-controller-manager-79558bbfbf-7pspc\" (UID: \"c55ae282-60fe-4b05-9bee-5539b9026800\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-7pspc"
Feb 17 14:00:52 crc kubenswrapper[4833]: E0217 14:00:52.432655 4833 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 17 14:00:52 crc kubenswrapper[4833]: E0217 14:00:52.432700 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert podName:ed6e734c-3eb1-47d0-ad82-aa5c934da55a nodeName:}" failed. No retries permitted until 2026-02-17 14:00:53.432687266 +0000 UTC m=+943.067786699 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert") pod "infra-operator-controller-manager-66d6b5f488-87mt8" (UID: "ed6e734c-3eb1-47d0-ad82-aa5c934da55a") : secret "infra-operator-webhook-server-cert" not found Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.440869 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqp9q\" (UniqueName: \"kubernetes.io/projected/032e7e1b-b6e8-4a69-905a-cd01e516f155-kube-api-access-mqp9q\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7\" (UID: \"032e7e1b-b6e8-4a69-905a-cd01e516f155\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.432777 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h6hk\" (UniqueName: \"kubernetes.io/projected/773b4953-21e2-42e2-b438-29b340f453ad-kube-api-access-2h6hk\") pod \"placement-operator-controller-manager-57bd55f9b7-dt8tb\" (UID: \"773b4953-21e2-42e2-b438-29b340f453ad\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-dt8tb" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.443557 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltrpc\" (UniqueName: \"kubernetes.io/projected/c35ceddd-0211-4828-a16e-4ff8d9d9dcd1-kube-api-access-ltrpc\") pod \"ovn-operator-controller-manager-85c99d655-s4gk9\" (UID: \"c35ceddd-0211-4828-a16e-4ff8d9d9dcd1\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s4gk9" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.536108 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blz59\" (UniqueName: \"kubernetes.io/projected/c55ae282-60fe-4b05-9bee-5539b9026800-kube-api-access-blz59\") pod \"swift-operator-controller-manager-79558bbfbf-7pspc\" (UID: 
\"c55ae282-60fe-4b05-9bee-5539b9026800\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-7pspc" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.623870 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-rq2hb" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.624129 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-brrjj" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.624536 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-bffkj" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.624826 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-kst2n" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.630786 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-dt8tb" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.630968 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-twx52" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.631315 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lgx5\" (UniqueName: \"kubernetes.io/projected/46d10b37-a41e-4063-8a7c-d1f8bf36847f-kube-api-access-8lgx5\") pod \"telemetry-operator-controller-manager-56dc67d744-6sx42\" (UID: \"46d10b37-a41e-4063-8a7c-d1f8bf36847f\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-6sx42" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.631418 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfmtr\" (UniqueName: \"kubernetes.io/projected/6f6fed97-8c9b-4262-962a-912fe2777ffc-kube-api-access-qfmtr\") pod \"test-operator-controller-manager-8467ccb4c8-n4rt9\" (UID: \"6f6fed97-8c9b-4262-962a-912fe2777ffc\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-n4rt9" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.636429 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp"] Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.637658 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.642460 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-5gdvl" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.653102 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp"] Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.680848 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lgx5\" (UniqueName: \"kubernetes.io/projected/46d10b37-a41e-4063-8a7c-d1f8bf36847f-kube-api-access-8lgx5\") pod \"telemetry-operator-controller-manager-56dc67d744-6sx42\" (UID: \"46d10b37-a41e-4063-8a7c-d1f8bf36847f\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-6sx42" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.714458 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-6sx42" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.726180 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r"] Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.727255 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.735192 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-5xrzh" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.735518 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.735554 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.742135 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfmtr\" (UniqueName: \"kubernetes.io/projected/6f6fed97-8c9b-4262-962a-912fe2777ffc-kube-api-access-qfmtr\") pod \"test-operator-controller-manager-8467ccb4c8-n4rt9\" (UID: \"6f6fed97-8c9b-4262-962a-912fe2777ffc\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-n4rt9" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.744006 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s4gk9" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.756436 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r"] Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.767197 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-jjchm"] Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.773282 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bdv7p"] Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.785187 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bdv7p" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.790348 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-g2crz" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.794317 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bdv7p"] Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.811081 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfmtr\" (UniqueName: \"kubernetes.io/projected/6f6fed97-8c9b-4262-962a-912fe2777ffc-kube-api-access-qfmtr\") pod \"test-operator-controller-manager-8467ccb4c8-n4rt9\" (UID: \"6f6fed97-8c9b-4262-962a-912fe2777ffc\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-n4rt9" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.816611 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-7pspc" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.844555 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pggtw\" (UniqueName: \"kubernetes.io/projected/a40de3a4-5ce4-4c9b-a022-acf3aeb9b852-kube-api-access-pggtw\") pod \"watcher-operator-controller-manager-59669cd6b8-hrbgp\" (UID: \"a40de3a4-5ce4-4c9b-a022-acf3aeb9b852\") " pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.844650 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.844769 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5zhk\" (UniqueName: \"kubernetes.io/projected/eaf6f3b9-35a6-4048-94cb-53563475161a-kube-api-access-l5zhk\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.844838 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" Feb 
17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.844924 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7\" (UID: \"032e7e1b-b6e8-4a69-905a-cd01e516f155\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7" Feb 17 14:00:52 crc kubenswrapper[4833]: E0217 14:00:52.845139 4833 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 14:00:52 crc kubenswrapper[4833]: E0217 14:00:52.845225 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert podName:032e7e1b-b6e8-4a69-905a-cd01e516f155 nodeName:}" failed. No retries permitted until 2026-02-17 14:00:53.845185391 +0000 UTC m=+943.480284834 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7" (UID: "032e7e1b-b6e8-4a69-905a-cd01e516f155") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.947557 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5zhk\" (UniqueName: \"kubernetes.io/projected/eaf6f3b9-35a6-4048-94cb-53563475161a-kube-api-access-l5zhk\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.949491 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.949552 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9znz9\" (UniqueName: \"kubernetes.io/projected/c42495df-6d52-41f8-8b47-764a31284526-kube-api-access-9znz9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-bdv7p\" (UID: \"c42495df-6d52-41f8-8b47-764a31284526\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bdv7p" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.949703 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pggtw\" (UniqueName: \"kubernetes.io/projected/a40de3a4-5ce4-4c9b-a022-acf3aeb9b852-kube-api-access-pggtw\") pod 
\"watcher-operator-controller-manager-59669cd6b8-hrbgp\" (UID: \"a40de3a4-5ce4-4c9b-a022-acf3aeb9b852\") " pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.949757 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" Feb 17 14:00:52 crc kubenswrapper[4833]: E0217 14:00:52.950344 4833 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 14:00:52 crc kubenswrapper[4833]: E0217 14:00:52.950680 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs podName:eaf6f3b9-35a6-4048-94cb-53563475161a nodeName:}" failed. No retries permitted until 2026-02-17 14:00:53.450659877 +0000 UTC m=+943.085759310 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs") pod "openstack-operator-controller-manager-5856dc4bfc-jh56r" (UID: "eaf6f3b9-35a6-4048-94cb-53563475161a") : secret "metrics-server-cert" not found Feb 17 14:00:52 crc kubenswrapper[4833]: E0217 14:00:52.950897 4833 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 14:00:52 crc kubenswrapper[4833]: E0217 14:00:52.950962 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs podName:eaf6f3b9-35a6-4048-94cb-53563475161a nodeName:}" failed. 
No retries permitted until 2026-02-17 14:00:53.450942825 +0000 UTC m=+943.086042258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs") pod "openstack-operator-controller-manager-5856dc4bfc-jh56r" (UID: "eaf6f3b9-35a6-4048-94cb-53563475161a") : secret "webhook-server-cert" not found Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.987260 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pggtw\" (UniqueName: \"kubernetes.io/projected/a40de3a4-5ce4-4c9b-a022-acf3aeb9b852-kube-api-access-pggtw\") pod \"watcher-operator-controller-manager-59669cd6b8-hrbgp\" (UID: \"a40de3a4-5ce4-4c9b-a022-acf3aeb9b852\") " pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp" Feb 17 14:00:52 crc kubenswrapper[4833]: I0217 14:00:52.988017 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5zhk\" (UniqueName: \"kubernetes.io/projected/eaf6f3b9-35a6-4048-94cb-53563475161a-kube-api-access-l5zhk\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.050850 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9znz9\" (UniqueName: \"kubernetes.io/projected/c42495df-6d52-41f8-8b47-764a31284526-kube-api-access-9znz9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-bdv7p\" (UID: \"c42495df-6d52-41f8-8b47-764a31284526\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bdv7p" Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.095415 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9znz9\" (UniqueName: 
\"kubernetes.io/projected/c42495df-6d52-41f8-8b47-764a31284526-kube-api-access-9znz9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-bdv7p\" (UID: \"c42495df-6d52-41f8-8b47-764a31284526\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bdv7p" Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.160917 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-n4rt9" Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.174967 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp" Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.299252 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bdv7p" Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.354569 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-zqqxg"] Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.359565 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-798bv"] Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.458624 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.458716 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert\") pod 
\"infra-operator-controller-manager-66d6b5f488-87mt8\" (UID: \"ed6e734c-3eb1-47d0-ad82-aa5c934da55a\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8" Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.458752 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" Feb 17 14:00:53 crc kubenswrapper[4833]: E0217 14:00:53.458867 4833 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 14:00:53 crc kubenswrapper[4833]: E0217 14:00:53.458909 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs podName:eaf6f3b9-35a6-4048-94cb-53563475161a nodeName:}" failed. No retries permitted until 2026-02-17 14:00:54.458896851 +0000 UTC m=+944.093996284 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs") pod "openstack-operator-controller-manager-5856dc4bfc-jh56r" (UID: "eaf6f3b9-35a6-4048-94cb-53563475161a") : secret "metrics-server-cert" not found Feb 17 14:00:53 crc kubenswrapper[4833]: E0217 14:00:53.459683 4833 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 14:00:53 crc kubenswrapper[4833]: E0217 14:00:53.459708 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs podName:eaf6f3b9-35a6-4048-94cb-53563475161a nodeName:}" failed. 
No retries permitted until 2026-02-17 14:00:54.459700043 +0000 UTC m=+944.094799476 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs") pod "openstack-operator-controller-manager-5856dc4bfc-jh56r" (UID: "eaf6f3b9-35a6-4048-94cb-53563475161a") : secret "webhook-server-cert" not found Feb 17 14:00:53 crc kubenswrapper[4833]: E0217 14:00:53.459969 4833 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 14:00:53 crc kubenswrapper[4833]: E0217 14:00:53.459991 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert podName:ed6e734c-3eb1-47d0-ad82-aa5c934da55a nodeName:}" failed. No retries permitted until 2026-02-17 14:00:55.459984002 +0000 UTC m=+945.095083435 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert") pod "infra-operator-controller-manager-66d6b5f488-87mt8" (UID: "ed6e734c-3eb1-47d0-ad82-aa5c934da55a") : secret "infra-operator-webhook-server-cert" not found Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.534559 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-hdxb7"] Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.535101 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zqqxg" event={"ID":"35e45f6b-d546-4818-94a1-c8490e83a786","Type":"ContainerStarted","Data":"ec0b6689fd0ba4002cb1cb86057a40eff56b8891689d6b56d5a406895742b19a"} Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.541940 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-jjchm" event={"ID":"df534d1f-b343-4090-b8ad-114864ea82ec","Type":"ContainerStarted","Data":"6eb01d17c28f9cd32b925eb9759ef32d18e8d6d12762f663e46fa02812fa1f34"} Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.545505 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-9595d6797-798bv" event={"ID":"d98ee2cd-c429-474d-a6a8-a7f9bc1ed559","Type":"ContainerStarted","Data":"d6fc2a4fcff7407d5a8a9d573ada3e44732bc1d5a2607472642d695b5c674aaf"} Feb 17 14:00:53 crc kubenswrapper[4833]: W0217 14:00:53.564470 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21fdadaa_0128_4daa_9dd8_2d4f7c600c99.slice/crio-13aeb34b4873731390825a9238844fec51e72aee3e12ea2cf29d159c0750d783 WatchSource:0}: Error finding container 13aeb34b4873731390825a9238844fec51e72aee3e12ea2cf29d159c0750d783: Status 404 returned error can't find the container with id 13aeb34b4873731390825a9238844fec51e72aee3e12ea2cf29d159c0750d783 Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.653598 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-kkrrz"] Feb 17 14:00:53 crc kubenswrapper[4833]: W0217 14:00:53.659786 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2acb502c_ae3f_4b68_b2c8_aaebcddc54d8.slice/crio-68e8fc406595c56addaeaf762ca4eba64b5b1ba99b558dc3c627e6f65a572d67 WatchSource:0}: Error finding container 68e8fc406595c56addaeaf762ca4eba64b5b1ba99b558dc3c627e6f65a572d67: Status 404 returned error can't find the container with id 68e8fc406595c56addaeaf762ca4eba64b5b1ba99b558dc3c627e6f65a572d67 Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.808919 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-brrjj"] Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.835493 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-5lrx9"] Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.844751 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-pc29h"] Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.850657 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-trcbs"] Feb 17 14:00:53 crc kubenswrapper[4833]: W0217 14:00:53.851923 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6504a5be_090f_4500_adaa_9ace23ba69f3.slice/crio-b7c5dc68d0dfe2f9bf41fd9dde10c8be765f0a4421c7f6720598e8a3859f6f84 WatchSource:0}: Error finding container b7c5dc68d0dfe2f9bf41fd9dde10c8be765f0a4421c7f6720598e8a3859f6f84: Status 404 returned error can't find the container with id b7c5dc68d0dfe2f9bf41fd9dde10c8be765f0a4421c7f6720598e8a3859f6f84 Feb 17 14:00:53 crc kubenswrapper[4833]: I0217 14:00:53.874217 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7\" (UID: \"032e7e1b-b6e8-4a69-905a-cd01e516f155\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7" Feb 17 14:00:53 crc kubenswrapper[4833]: E0217 14:00:53.874600 4833 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 14:00:53 crc kubenswrapper[4833]: E0217 14:00:53.874741 4833 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert podName:032e7e1b-b6e8-4a69-905a-cd01e516f155 nodeName:}" failed. No retries permitted until 2026-02-17 14:00:55.874727521 +0000 UTC m=+945.509826954 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7" (UID: "032e7e1b-b6e8-4a69-905a-cd01e516f155") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.061305 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-dt8tb"] Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.071633 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-twx52"] Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.089910 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-kst2n"] Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.107733 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-s4gk9"] Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.111776 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-rq2hb"] Feb 17 14:00:54 crc kubenswrapper[4833]: W0217 14:00:54.116514 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22eaba8f_f822_42a0_a01f_4b1bdd5c0728.slice/crio-d3c47671e09d5609f5d4d2c7ca6463a6bdbe1243b9a18599499f16e2b79f61cf WatchSource:0}: Error finding container 
d3c47671e09d5609f5d4d2c7ca6463a6bdbe1243b9a18599499f16e2b79f61cf: Status 404 returned error can't find the container with id d3c47671e09d5609f5d4d2c7ca6463a6bdbe1243b9a18599499f16e2b79f61cf Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.130902 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-bffkj"] Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.135441 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-7pspc"] Feb 17 14:00:54 crc kubenswrapper[4833]: E0217 14:00:54.160615 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:015f7f2d8b5afc85e51dd3b2e02a4cfb8294b543437315b291006d2416764db9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-blz59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-79558bbfbf-7pspc_openstack-operators(c55ae282-60fe-4b05-9bee-5539b9026800): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 14:00:54 crc kubenswrapper[4833]: E0217 14:00:54.161250 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h9gqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-54967dbbdf-bffkj_openstack-operators(6768cf80-4466-4e34-b1f5-fd7b4728b6ff): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 14:00:54 crc kubenswrapper[4833]: E0217 14:00:54.163067 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-bffkj" podUID="6768cf80-4466-4e34-b1f5-fd7b4728b6ff" Feb 17 14:00:54 crc kubenswrapper[4833]: E0217 14:00:54.163112 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-7pspc" podUID="c55ae282-60fe-4b05-9bee-5539b9026800" Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.267516 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp"] Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.275444 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bdv7p"] Feb 17 14:00:54 crc kubenswrapper[4833]: W0217 14:00:54.283251 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc42495df_6d52_41f8_8b47_764a31284526.slice/crio-da1159b8387f324e0e6f6075ad887bbcc40527f27e3c7bc9bfdeb89a36471835 WatchSource:0}: Error finding container da1159b8387f324e0e6f6075ad887bbcc40527f27e3c7bc9bfdeb89a36471835: Status 404 returned error can't find the container with id da1159b8387f324e0e6f6075ad887bbcc40527f27e3c7bc9bfdeb89a36471835 Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.285610 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-6sx42"] Feb 17 14:00:54 crc kubenswrapper[4833]: E0217 14:00:54.286024 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9znz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-bdv7p_openstack-operators(c42495df-6d52-41f8-8b47-764a31284526): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 14:00:54 crc kubenswrapper[4833]: E0217 14:00:54.287619 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bdv7p" podUID="c42495df-6d52-41f8-8b47-764a31284526" Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.291220 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-n4rt9"] Feb 17 14:00:54 crc kubenswrapper[4833]: E0217 14:00:54.297392 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8lgx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-56dc67d744-6sx42_openstack-operators(46d10b37-a41e-4063-8a7c-d1f8bf36847f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 14:00:54 crc kubenswrapper[4833]: E0217 14:00:54.298842 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-6sx42" podUID="46d10b37-a41e-4063-8a7c-d1f8bf36847f" Feb 17 14:00:54 crc kubenswrapper[4833]: E0217 14:00:54.299031 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qfmtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-8467ccb4c8-n4rt9_openstack-operators(6f6fed97-8c9b-4262-962a-912fe2777ffc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 14:00:54 crc kubenswrapper[4833]: E0217 14:00:54.300642 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-n4rt9" podUID="6f6fed97-8c9b-4262-962a-912fe2777ffc" Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.484348 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" Feb 17 14:00:54 crc kubenswrapper[4833]: E0217 14:00:54.484680 4833 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 14:00:54 crc kubenswrapper[4833]: E0217 14:00:54.484756 
4833 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 14:00:54 crc kubenswrapper[4833]: E0217 14:00:54.484763 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs podName:eaf6f3b9-35a6-4048-94cb-53563475161a nodeName:}" failed. No retries permitted until 2026-02-17 14:00:56.484744697 +0000 UTC m=+946.119844130 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs") pod "openstack-operator-controller-manager-5856dc4bfc-jh56r" (UID: "eaf6f3b9-35a6-4048-94cb-53563475161a") : secret "webhook-server-cert" not found Feb 17 14:00:54 crc kubenswrapper[4833]: E0217 14:00:54.484839 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs podName:eaf6f3b9-35a6-4048-94cb-53563475161a nodeName:}" failed. No retries permitted until 2026-02-17 14:00:56.484820799 +0000 UTC m=+946.119920232 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs") pod "openstack-operator-controller-manager-5856dc4bfc-jh56r" (UID: "eaf6f3b9-35a6-4048-94cb-53563475161a") : secret "metrics-server-cert" not found Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.484553 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.557358 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-twx52" event={"ID":"9985098c-6fe1-4a48-8aaf-6ef63f487c41","Type":"ContainerStarted","Data":"f1d14deb5870e43c0a213efa026503723c283ed98106505eac2c72141452d82d"} Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.559680 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bdv7p" event={"ID":"c42495df-6d52-41f8-8b47-764a31284526","Type":"ContainerStarted","Data":"da1159b8387f324e0e6f6075ad887bbcc40527f27e3c7bc9bfdeb89a36471835"} Feb 17 14:00:54 crc kubenswrapper[4833]: E0217 14:00:54.561741 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bdv7p" podUID="c42495df-6d52-41f8-8b47-764a31284526" Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.564402 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-5lrx9" event={"ID":"19d72e8b-fb46-4e69-bcbf-90eae727d08f","Type":"ContainerStarted","Data":"bdddc169749fc64f1a37318e10787b44fe94759a0e2f9ddd3e3a9417e333d9aa"} Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.574682 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-trcbs" event={"ID":"6504a5be-090f-4500-adaa-9ace23ba69f3","Type":"ContainerStarted","Data":"b7c5dc68d0dfe2f9bf41fd9dde10c8be765f0a4421c7f6720598e8a3859f6f84"} Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.576430 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-bffkj" event={"ID":"6768cf80-4466-4e34-b1f5-fd7b4728b6ff","Type":"ContainerStarted","Data":"861df3ccb315c8a634db4e1824826cdc70a31cf9851b19e10759a67af870484d"} Feb 17 14:00:54 crc kubenswrapper[4833]: E0217 14:00:54.578053 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-bffkj" podUID="6768cf80-4466-4e34-b1f5-fd7b4728b6ff" Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.579819 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-dt8tb" event={"ID":"773b4953-21e2-42e2-b438-29b340f453ad","Type":"ContainerStarted","Data":"bc542cbc6254cd49dcd1ca563ad48ec9aa6dd29de578e40a4c1e605c48a35547"} Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.581718 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-6sx42" event={"ID":"46d10b37-a41e-4063-8a7c-d1f8bf36847f","Type":"ContainerStarted","Data":"00889bb5f5104de9e43ca682ce1867374aa1fca0774a352cab93d1364536bc60"} Feb 17 14:00:54 crc kubenswrapper[4833]: E0217 14:00:54.582914 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-6sx42" podUID="46d10b37-a41e-4063-8a7c-d1f8bf36847f" Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.583365 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s4gk9" event={"ID":"c35ceddd-0211-4828-a16e-4ff8d9d9dcd1","Type":"ContainerStarted","Data":"fbb6c9537b7f6815b2a0d1d1be3c96ca1d56e8be274674f081264a7d09498ec6"} Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.584758 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-kst2n" event={"ID":"22eaba8f-f822-42a0-a01f-4b1bdd5c0728","Type":"ContainerStarted","Data":"d3c47671e09d5609f5d4d2c7ca6463a6bdbe1243b9a18599499f16e2b79f61cf"} Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.589383 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-hdxb7" event={"ID":"21fdadaa-0128-4daa-9dd8-2d4f7c600c99","Type":"ContainerStarted","Data":"13aeb34b4873731390825a9238844fec51e72aee3e12ea2cf29d159c0750d783"} Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.597723 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-rq2hb" 
event={"ID":"056f0c10-839f-4bdf-9b0e-2c6eabb209a8","Type":"ContainerStarted","Data":"223b62fe4ffb3314e0ae03ea585db5d98eb4555d437b3681b32bace349262ea6"} Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.608423 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-7pspc" event={"ID":"c55ae282-60fe-4b05-9bee-5539b9026800","Type":"ContainerStarted","Data":"065ea8c0d8aad89ddaeb30a45ae7394639ab994b6dad783311ea3931022affd8"} Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.612957 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-pc29h" event={"ID":"3c850072-fbc2-4f6a-a59c-51498bc5aef2","Type":"ContainerStarted","Data":"40d43e1e74f50210286b5e8864ec9cce4cea9c7eaff2c16ebf71767c3b5415fc"} Feb 17 14:00:54 crc kubenswrapper[4833]: E0217 14:00:54.614313 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:015f7f2d8b5afc85e51dd3b2e02a4cfb8294b543437315b291006d2416764db9\\\"\"" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-7pspc" podUID="c55ae282-60fe-4b05-9bee-5539b9026800" Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.615261 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-kkrrz" event={"ID":"2acb502c-ae3f-4b68-b2c8-aaebcddc54d8","Type":"ContainerStarted","Data":"68e8fc406595c56addaeaf762ca4eba64b5b1ba99b558dc3c627e6f65a572d67"} Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.616587 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-brrjj" 
event={"ID":"92da7367-c0eb-4ad3-a2d9-aebe2540cb11","Type":"ContainerStarted","Data":"fdc9c3a782488f720a2f57113a74ccd0f7bf893f95b8fc05dc65833c3b987f3c"} Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.617497 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp" event={"ID":"a40de3a4-5ce4-4c9b-a022-acf3aeb9b852","Type":"ContainerStarted","Data":"846c34a84d3b1bc98c190d0654914eef2dbe22d0acca4caca2ca767b1ca96d41"} Feb 17 14:00:54 crc kubenswrapper[4833]: I0217 14:00:54.618881 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-n4rt9" event={"ID":"6f6fed97-8c9b-4262-962a-912fe2777ffc","Type":"ContainerStarted","Data":"3854917bb459d110d4414c2198bbf01e8e9ea67cdf473e9a9e0fd7e3138d1457"} Feb 17 14:00:54 crc kubenswrapper[4833]: E0217 14:00:54.620859 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27\\\"\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-n4rt9" podUID="6f6fed97-8c9b-4262-962a-912fe2777ffc" Feb 17 14:00:55 crc kubenswrapper[4833]: I0217 14:00:55.523369 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert\") pod \"infra-operator-controller-manager-66d6b5f488-87mt8\" (UID: \"ed6e734c-3eb1-47d0-ad82-aa5c934da55a\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8" Feb 17 14:00:55 crc kubenswrapper[4833]: E0217 14:00:55.523613 4833 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 14:00:55 crc kubenswrapper[4833]: E0217 
14:00:55.523674 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert podName:ed6e734c-3eb1-47d0-ad82-aa5c934da55a nodeName:}" failed. No retries permitted until 2026-02-17 14:00:59.523654443 +0000 UTC m=+949.158753876 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert") pod "infra-operator-controller-manager-66d6b5f488-87mt8" (UID: "ed6e734c-3eb1-47d0-ad82-aa5c934da55a") : secret "infra-operator-webhook-server-cert" not found Feb 17 14:00:55 crc kubenswrapper[4833]: E0217 14:00:55.629209 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:015f7f2d8b5afc85e51dd3b2e02a4cfb8294b543437315b291006d2416764db9\\\"\"" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-7pspc" podUID="c55ae282-60fe-4b05-9bee-5539b9026800" Feb 17 14:00:55 crc kubenswrapper[4833]: E0217 14:00:55.635275 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27\\\"\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-n4rt9" podUID="6f6fed97-8c9b-4262-962a-912fe2777ffc" Feb 17 14:00:55 crc kubenswrapper[4833]: E0217 14:00:55.635339 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-6sx42" 
podUID="46d10b37-a41e-4063-8a7c-d1f8bf36847f"
Feb 17 14:00:55 crc kubenswrapper[4833]: E0217 14:00:55.635383 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-bffkj" podUID="6768cf80-4466-4e34-b1f5-fd7b4728b6ff"
Feb 17 14:00:55 crc kubenswrapper[4833]: E0217 14:00:55.636273 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bdv7p" podUID="c42495df-6d52-41f8-8b47-764a31284526"
Feb 17 14:00:55 crc kubenswrapper[4833]: I0217 14:00:55.927848 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7\" (UID: \"032e7e1b-b6e8-4a69-905a-cd01e516f155\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7"
Feb 17 14:00:55 crc kubenswrapper[4833]: E0217 14:00:55.928062 4833 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 14:00:55 crc kubenswrapper[4833]: E0217 14:00:55.928109 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert podName:032e7e1b-b6e8-4a69-905a-cd01e516f155 nodeName:}" failed. No retries permitted until 2026-02-17 14:00:59.92809621 +0000 UTC m=+949.563195643 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7" (UID: "032e7e1b-b6e8-4a69-905a-cd01e516f155") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 14:00:56 crc kubenswrapper[4833]: I0217 14:00:56.540655 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r"
Feb 17 14:00:56 crc kubenswrapper[4833]: I0217 14:00:56.540778 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r"
Feb 17 14:00:56 crc kubenswrapper[4833]: E0217 14:00:56.540891 4833 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 17 14:00:56 crc kubenswrapper[4833]: E0217 14:00:56.540921 4833 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 17 14:00:56 crc kubenswrapper[4833]: E0217 14:00:56.540980 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs podName:eaf6f3b9-35a6-4048-94cb-53563475161a nodeName:}" failed. No retries permitted until 2026-02-17 14:01:00.540958287 +0000 UTC m=+950.176057790 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs") pod "openstack-operator-controller-manager-5856dc4bfc-jh56r" (UID: "eaf6f3b9-35a6-4048-94cb-53563475161a") : secret "webhook-server-cert" not found
Feb 17 14:00:56 crc kubenswrapper[4833]: E0217 14:00:56.541005 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs podName:eaf6f3b9-35a6-4048-94cb-53563475161a nodeName:}" failed. No retries permitted until 2026-02-17 14:01:00.540995648 +0000 UTC m=+950.176095181 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs") pod "openstack-operator-controller-manager-5856dc4bfc-jh56r" (UID: "eaf6f3b9-35a6-4048-94cb-53563475161a") : secret "metrics-server-cert" not found
Feb 17 14:00:59 crc kubenswrapper[4833]: I0217 14:00:59.613392 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert\") pod \"infra-operator-controller-manager-66d6b5f488-87mt8\" (UID: \"ed6e734c-3eb1-47d0-ad82-aa5c934da55a\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8"
Feb 17 14:00:59 crc kubenswrapper[4833]: E0217 14:00:59.613593 4833 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 17 14:00:59 crc kubenswrapper[4833]: E0217 14:00:59.613795 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert podName:ed6e734c-3eb1-47d0-ad82-aa5c934da55a nodeName:}" failed. No retries permitted until 2026-02-17 14:01:07.613773749 +0000 UTC m=+957.248873192 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert") pod "infra-operator-controller-manager-66d6b5f488-87mt8" (UID: "ed6e734c-3eb1-47d0-ad82-aa5c934da55a") : secret "infra-operator-webhook-server-cert" not found
Feb 17 14:01:00 crc kubenswrapper[4833]: I0217 14:01:00.018919 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7\" (UID: \"032e7e1b-b6e8-4a69-905a-cd01e516f155\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7"
Feb 17 14:01:00 crc kubenswrapper[4833]: E0217 14:01:00.019170 4833 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 14:01:00 crc kubenswrapper[4833]: E0217 14:01:00.019259 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert podName:032e7e1b-b6e8-4a69-905a-cd01e516f155 nodeName:}" failed. No retries permitted until 2026-02-17 14:01:08.019241525 +0000 UTC m=+957.654340958 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7" (UID: "032e7e1b-b6e8-4a69-905a-cd01e516f155") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 14:01:00 crc kubenswrapper[4833]: I0217 14:01:00.629539 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r"
Feb 17 14:01:00 crc kubenswrapper[4833]: I0217 14:01:00.629843 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r"
Feb 17 14:01:00 crc kubenswrapper[4833]: E0217 14:01:00.629734 4833 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 17 14:01:00 crc kubenswrapper[4833]: E0217 14:01:00.630083 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs podName:eaf6f3b9-35a6-4048-94cb-53563475161a nodeName:}" failed. No retries permitted until 2026-02-17 14:01:08.630068372 +0000 UTC m=+958.265167805 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs") pod "openstack-operator-controller-manager-5856dc4bfc-jh56r" (UID: "eaf6f3b9-35a6-4048-94cb-53563475161a") : secret "metrics-server-cert" not found
Feb 17 14:01:00 crc kubenswrapper[4833]: E0217 14:01:00.630020 4833 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 17 14:01:00 crc kubenswrapper[4833]: E0217 14:01:00.630408 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs podName:eaf6f3b9-35a6-4048-94cb-53563475161a nodeName:}" failed. No retries permitted until 2026-02-17 14:01:08.630400221 +0000 UTC m=+958.265499654 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs") pod "openstack-operator-controller-manager-5856dc4bfc-jh56r" (UID: "eaf6f3b9-35a6-4048-94cb-53563475161a") : secret "webhook-server-cert" not found
Feb 17 14:01:06 crc kubenswrapper[4833]: I0217 14:01:06.042509 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 14:01:07 crc kubenswrapper[4833]: I0217 14:01:07.665289 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert\") pod \"infra-operator-controller-manager-66d6b5f488-87mt8\" (UID: \"ed6e734c-3eb1-47d0-ad82-aa5c934da55a\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8"
Feb 17 14:01:07 crc kubenswrapper[4833]: E0217 14:01:07.665538 4833 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 17 14:01:07 crc kubenswrapper[4833]: E0217 14:01:07.665872 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert podName:ed6e734c-3eb1-47d0-ad82-aa5c934da55a nodeName:}" failed. No retries permitted until 2026-02-17 14:01:23.665840908 +0000 UTC m=+973.300940431 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert") pod "infra-operator-controller-manager-66d6b5f488-87mt8" (UID: "ed6e734c-3eb1-47d0-ad82-aa5c934da55a") : secret "infra-operator-webhook-server-cert" not found
Feb 17 14:01:08 crc kubenswrapper[4833]: I0217 14:01:08.071603 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7\" (UID: \"032e7e1b-b6e8-4a69-905a-cd01e516f155\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7"
Feb 17 14:01:08 crc kubenswrapper[4833]: E0217 14:01:08.071852 4833 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 14:01:08 crc kubenswrapper[4833]: E0217 14:01:08.072063 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert podName:032e7e1b-b6e8-4a69-905a-cd01e516f155 nodeName:}" failed. No retries permitted until 2026-02-17 14:01:24.072045755 +0000 UTC m=+973.707145188 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7" (UID: "032e7e1b-b6e8-4a69-905a-cd01e516f155") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 14:01:08 crc kubenswrapper[4833]: I0217 14:01:08.681553 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r"
Feb 17 14:01:08 crc kubenswrapper[4833]: I0217 14:01:08.681739 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r"
Feb 17 14:01:08 crc kubenswrapper[4833]: E0217 14:01:08.681768 4833 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 17 14:01:08 crc kubenswrapper[4833]: E0217 14:01:08.681870 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs podName:eaf6f3b9-35a6-4048-94cb-53563475161a nodeName:}" failed. No retries permitted until 2026-02-17 14:01:24.681842825 +0000 UTC m=+974.316942268 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs") pod "openstack-operator-controller-manager-5856dc4bfc-jh56r" (UID: "eaf6f3b9-35a6-4048-94cb-53563475161a") : secret "metrics-server-cert" not found
Feb 17 14:01:08 crc kubenswrapper[4833]: E0217 14:01:08.681999 4833 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 17 14:01:08 crc kubenswrapper[4833]: E0217 14:01:08.682179 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs podName:eaf6f3b9-35a6-4048-94cb-53563475161a nodeName:}" failed. No retries permitted until 2026-02-17 14:01:24.682157364 +0000 UTC m=+974.317256797 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs") pod "openstack-operator-controller-manager-5856dc4bfc-jh56r" (UID: "eaf6f3b9-35a6-4048-94cb-53563475161a") : secret "webhook-server-cert" not found
Feb 17 14:01:12 crc kubenswrapper[4833]: E0217 14:01:12.539111 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.220:5001/openstack-k8s-operators/watcher-operator:205feca93c544be6b9b4f78fb631537dc3a19ff8"
Feb 17 14:01:12 crc kubenswrapper[4833]: E0217 14:01:12.539646 4833 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.220:5001/openstack-k8s-operators/watcher-operator:205feca93c544be6b9b4f78fb631537dc3a19ff8"
Feb 17 14:01:12 crc kubenswrapper[4833]: E0217 14:01:12.539815 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.220:5001/openstack-k8s-operators/watcher-operator:205feca93c544be6b9b4f78fb631537dc3a19ff8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pggtw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-59669cd6b8-hrbgp_openstack-operators(a40de3a4-5ce4-4c9b-a022-acf3aeb9b852): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 14:01:12 crc kubenswrapper[4833]: E0217 14:01:12.541877 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp" podUID="a40de3a4-5ce4-4c9b-a022-acf3aeb9b852"
Feb 17 14:01:12 crc kubenswrapper[4833]: E0217 14:01:12.762529 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.220:5001/openstack-k8s-operators/watcher-operator:205feca93c544be6b9b4f78fb631537dc3a19ff8\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp" podUID="a40de3a4-5ce4-4c9b-a022-acf3aeb9b852"
Feb 17 14:01:13 crc kubenswrapper[4833]: E0217 14:01:13.339434 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469"
Feb 17 14:01:13 crc kubenswrapper[4833]: E0217 14:01:13.339617 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2l8m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-6c78d668d5-pc29h_openstack-operators(3c850072-fbc2-4f6a-a59c-51498bc5aef2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 14:01:13 crc kubenswrapper[4833]: E0217 14:01:13.340947 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-pc29h" podUID="3c850072-fbc2-4f6a-a59c-51498bc5aef2"
Feb 17 14:01:13 crc kubenswrapper[4833]: E0217 14:01:13.769631 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-pc29h" podUID="3c850072-fbc2-4f6a-a59c-51498bc5aef2"
Feb 17 14:01:13 crc kubenswrapper[4833]: E0217 14:01:13.931180 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c"
Feb 17 14:01:13 crc kubenswrapper[4833]: E0217 14:01:13.931598 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fhqj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5ddd85db87-kst2n_openstack-operators(22eaba8f-f822-42a0-a01f-4b1bdd5c0728): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 14:01:13 crc kubenswrapper[4833]: E0217 14:01:13.933448 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-kst2n" podUID="22eaba8f-f822-42a0-a01f-4b1bdd5c0728"
Feb 17 14:01:14 crc kubenswrapper[4833]: E0217 14:01:14.775898 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-kst2n" podUID="22eaba8f-f822-42a0-a01f-4b1bdd5c0728"
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.799565 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-rq2hb" event={"ID":"056f0c10-839f-4bdf-9b0e-2c6eabb209a8","Type":"ContainerStarted","Data":"ff7c96921531daddab2772d65d661b1ac875013b6ab6cbaf6f26e65e673d191f"}
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.800988 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-rq2hb"
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.802419 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-n4rt9" event={"ID":"6f6fed97-8c9b-4262-962a-912fe2777ffc","Type":"ContainerStarted","Data":"4495a771545b7606af9bda8911e188359a4032c8dc5bbd28e6b045665c36fc60"}
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.802812 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-n4rt9"
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.804210 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-6sx42" event={"ID":"46d10b37-a41e-4063-8a7c-d1f8bf36847f","Type":"ContainerStarted","Data":"6b1fcd1307ade00552041b2e48f5ba9e43a7c717522956da58626909b4649e1a"}
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.804587 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-6sx42"
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.806250 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zqqxg" event={"ID":"35e45f6b-d546-4818-94a1-c8490e83a786","Type":"ContainerStarted","Data":"25a46022fb6b233742ca7c4546c408bfccbcbef01a6cc40786c32368b8112037"}
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.806657 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zqqxg"
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.809256 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bdv7p" event={"ID":"c42495df-6d52-41f8-8b47-764a31284526","Type":"ContainerStarted","Data":"316f537ee1dc23554ed439de385b055e24e6bc95ffc4a878e88bc7d18ab76d36"}
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.813236 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s4gk9" event={"ID":"c35ceddd-0211-4828-a16e-4ff8d9d9dcd1","Type":"ContainerStarted","Data":"dbc2e0c14eeab0868e5572a52617a816db5e5c88db27c9451c7b310cda0b8e6b"}
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.813591 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s4gk9"
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.815895 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-7pspc" event={"ID":"c55ae282-60fe-4b05-9bee-5539b9026800","Type":"ContainerStarted","Data":"2396b97a7d349c9660411799a89f33bc071b309a4c6910985f3ed935eda3bb66"}
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.816105 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-7pspc"
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.818666 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-twx52" event={"ID":"9985098c-6fe1-4a48-8aaf-6ef63f487c41","Type":"ContainerStarted","Data":"d6dc2b485b0fed1feea91f96564851ac4c893589a0dcefa0998978389aac5d80"}
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.818814 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-twx52"
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.821986 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-hdxb7" event={"ID":"21fdadaa-0128-4daa-9dd8-2d4f7c600c99","Type":"ContainerStarted","Data":"4bde4f0668410e593b42cf5493285fee2f74ad73d221cfbe2b3f404a203adefe"}
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.822137 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-hdxb7"
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.823670 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-bffkj" event={"ID":"6768cf80-4466-4e34-b1f5-fd7b4728b6ff","Type":"ContainerStarted","Data":"98e1614104b9f990fd8e1b52fdf02f7ef6b2f24fb17833ad91f7b5b0c9d89441"}
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.823921 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-bffkj"
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.825044 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-kkrrz" event={"ID":"2acb502c-ae3f-4b68-b2c8-aaebcddc54d8","Type":"ContainerStarted","Data":"5af4c93d2f1e098d6d78812efd1255ceb527b53577a48b011ae14f164c24c5b3"}
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.825537 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-kkrrz"
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.828154 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-5lrx9" event={"ID":"19d72e8b-fb46-4e69-bcbf-90eae727d08f","Type":"ContainerStarted","Data":"c677ff0ea2813c9b20cd707ff78c3b0cd80344ffafe56c37fc6b878f51802685"}
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.828302 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-5lrx9"
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.829934 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-9595d6797-798bv" event={"ID":"d98ee2cd-c429-474d-a6a8-a7f9bc1ed559","Type":"ContainerStarted","Data":"f514c1be027f0ad955b6a7082a7c3111641f26a5ff3e13418ff075f88b3f846d"}
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.830070 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-9595d6797-798bv"
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.831806 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-brrjj" event={"ID":"92da7367-c0eb-4ad3-a2d9-aebe2540cb11","Type":"ContainerStarted","Data":"081ee19dd80cebaa1833e47c1c993a54fd3bc59fefbffb326a5c6899f1ddecfc"}
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.831928 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-brrjj"
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.833613 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-jjchm" event={"ID":"df534d1f-b343-4090-b8ad-114864ea82ec","Type":"ContainerStarted","Data":"f30b5b882839f6f058349dc3997623e26e8be5738fc0fc1aebde07935af00e05"}
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.833769 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-jjchm"
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.839831 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-trcbs" event={"ID":"6504a5be-090f-4500-adaa-9ace23ba69f3","Type":"ContainerStarted","Data":"35e0ab664125968400f54d07ba55a32a7e1513242ae86c82514bc8d426083779"}
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.839914 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-trcbs"
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.842112 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-dt8tb" event={"ID":"773b4953-21e2-42e2-b438-29b340f453ad","Type":"ContainerStarted","Data":"bdfb3013a913beae8eb2918f1317b970c1032b401c9038ab6857ef9c8b8496f7"}
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.842813 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-dt8tb"
Feb 17 14:01:18 crc kubenswrapper[4833]: I0217 14:01:18.922217 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-rq2hb" podStartSLOduration=6.45298309 podStartE2EDuration="27.922202286s" podCreationTimestamp="2026-02-17 14:00:51 +0000 UTC" firstStartedPulling="2026-02-17 14:00:54.157028219 +0000 UTC m=+943.792127652" lastFinishedPulling="2026-02-17 14:01:15.626247415 +0000 UTC m=+965.261346848" observedRunningTime="2026-02-17 14:01:18.920132187 +0000 UTC m=+968.555231620" watchObservedRunningTime="2026-02-17 14:01:18.922202286 +0000 UTC m=+968.557301719"
Feb 17 14:01:19 crc kubenswrapper[4833]: I0217 14:01:19.151422 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-twx52" podStartSLOduration=8.993255916 podStartE2EDuration="28.151400865s" podCreationTimestamp="2026-02-17 14:00:51 +0000 UTC" firstStartedPulling="2026-02-17 14:00:54.152802309 +0000 UTC m=+943.787901742" lastFinishedPulling="2026-02-17 14:01:13.310947258 +0000 UTC m=+962.946046691" observedRunningTime="2026-02-17 14:01:19.044719876 +0000 UTC m=+968.679819309" watchObservedRunningTime="2026-02-17 14:01:19.151400865 +0000 UTC m=+968.786500668"
Feb 17 14:01:19 crc kubenswrapper[4833]: I0217 14:01:19.226375 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-jjchm" podStartSLOduration=7.618297206 podStartE2EDuration="28.226359774s" podCreationTimestamp="2026-02-17 14:00:51 +0000 UTC" firstStartedPulling="2026-02-17 14:00:52.701447779 +0000 UTC m=+942.336547212" lastFinishedPulling="2026-02-17 14:01:13.309510347 +0000 UTC m=+962.944609780" observedRunningTime="2026-02-17 14:01:19.224128041 +0000 UTC m=+968.859227484" watchObservedRunningTime="2026-02-17 14:01:19.226359774 +0000 UTC m=+968.861459197"
Feb 17 14:01:19 crc kubenswrapper[4833]: I0217 14:01:19.227561 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-n4rt9" podStartSLOduration=3.72988125 podStartE2EDuration="27.227556368s" podCreationTimestamp="2026-02-17 14:00:52 +0000 UTC" firstStartedPulling="2026-02-17 14:00:54.298851327 +0000 UTC m=+943.933950760" lastFinishedPulling="2026-02-17 14:01:17.796526445 +0000 UTC m=+967.431625878" observedRunningTime="2026-02-17 14:01:19.157004305 +0000 UTC m=+968.792103738" watchObservedRunningTime="2026-02-17 14:01:19.227556368 +0000 UTC m=+968.862655801"
Feb 17 14:01:19 crc kubenswrapper[4833]: I0217 14:01:19.330771 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-hdxb7" podStartSLOduration=7.990267979 podStartE2EDuration="28.330746149s" podCreationTimestamp="2026-02-17 14:00:51 +0000 UTC" firstStartedPulling="2026-02-17 14:00:53.56838682 +0000 UTC m=+943.203486253" lastFinishedPulling="2026-02-17 14:01:13.90886499 +0000 UTC m=+963.543964423" observedRunningTime="2026-02-17 14:01:19.262862711 +0000 UTC m=+968.897962144" watchObservedRunningTime="2026-02-17 14:01:19.330746149 +0000 UTC m=+968.965845592"
Feb 17 14:01:19 crc kubenswrapper[4833]: I0217 14:01:19.332600 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-9595d6797-798bv" podStartSLOduration=7.848895175 podStartE2EDuration="28.332590662s" podCreationTimestamp="2026-02-17 14:00:51 +0000 UTC" firstStartedPulling="2026-02-17 14:00:53.426307565 +0000 UTC m=+943.061406998" lastFinishedPulling="2026-02-17 14:01:13.910003052 +0000 UTC m=+963.545102485" observedRunningTime="2026-02-17 14:01:19.319346895 +0000 UTC m=+968.954446348" watchObservedRunningTime="2026-02-17 14:01:19.332590662 +0000 UTC m=+968.967690115"
Feb 17 14:01:19 crc kubenswrapper[4833]: I0217 14:01:19.388491 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-kkrrz" podStartSLOduration=8.741382012999999 podStartE2EDuration="28.388473289s" podCreationTimestamp="2026-02-17 14:00:51 +0000 UTC" firstStartedPulling="2026-02-17 14:00:53.662344759 +0000 UTC m=+943.297444192" lastFinishedPulling="2026-02-17 14:01:13.309436035 +0000 UTC m=+962.944535468" observedRunningTime="2026-02-17 14:01:19.387505901 +0000 UTC m=+969.022605344" watchObservedRunningTime="2026-02-17 14:01:19.388473289 +0000 UTC m=+969.023572722"
Feb 17 14:01:19 crc kubenswrapper[4833]: I0217 14:01:19.418743 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-5lrx9" podStartSLOduration=8.343522033 podStartE2EDuration="28.418724078s" podCreationTimestamp="2026-02-17 14:00:51 +0000 UTC" firstStartedPulling="2026-02-17 14:00:53.834969722 +0000 UTC m=+943.470069155" lastFinishedPulling="2026-02-17 14:01:13.910171767 +0000 UTC m=+963.545271200" observedRunningTime="2026-02-17 14:01:19.417198305 +0000 UTC m=+969.052297738" watchObservedRunningTime="2026-02-17 14:01:19.418724078 +0000 UTC m=+969.053823531" Feb 17 14:01:19 crc kubenswrapper[4833]: I0217 14:01:19.451309 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-brrjj" podStartSLOduration=8.35045722 podStartE2EDuration="28.451293383s" podCreationTimestamp="2026-02-17 14:00:51 +0000 UTC" firstStartedPulling="2026-02-17 14:00:53.808088788 +0000 UTC m=+943.443188221" lastFinishedPulling="2026-02-17 14:01:13.908924951 +0000 UTC m=+963.544024384" observedRunningTime="2026-02-17 14:01:19.450000986 +0000 UTC m=+969.085100429" watchObservedRunningTime="2026-02-17 14:01:19.451293383 +0000 UTC m=+969.086392816" Feb 17 14:01:19 crc kubenswrapper[4833]: I0217 14:01:19.480474 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-dt8tb" podStartSLOduration=9.291542018 podStartE2EDuration="28.480458321s" podCreationTimestamp="2026-02-17 14:00:51 +0000 UTC" firstStartedPulling="2026-02-17 14:00:54.12116755 +0000 UTC m=+943.756266983" lastFinishedPulling="2026-02-17 14:01:13.310083863 +0000 UTC m=+962.945183286" observedRunningTime="2026-02-17 14:01:19.473978717 +0000 UTC m=+969.109078150" watchObservedRunningTime="2026-02-17 14:01:19.480458321 +0000 UTC m=+969.115557754" Feb 17 14:01:19 crc kubenswrapper[4833]: I0217 14:01:19.505963 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zqqxg" podStartSLOduration=8.004959976 podStartE2EDuration="28.505946545s" podCreationTimestamp="2026-02-17 14:00:51 +0000 UTC" firstStartedPulling="2026-02-17 14:00:53.409172358 +0000 UTC m=+943.044271791" lastFinishedPulling="2026-02-17 14:01:13.910158917 +0000 UTC m=+963.545258360" observedRunningTime="2026-02-17 14:01:19.504686169 +0000 UTC m=+969.139785602" watchObservedRunningTime="2026-02-17 14:01:19.505946545 +0000 UTC m=+969.141045978" Feb 17 14:01:19 crc kubenswrapper[4833]: I0217 14:01:19.533816 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s4gk9" podStartSLOduration=8.778802776 podStartE2EDuration="28.533801236s" podCreationTimestamp="2026-02-17 14:00:51 +0000 UTC" firstStartedPulling="2026-02-17 14:00:54.155013482 +0000 UTC m=+943.790112915" lastFinishedPulling="2026-02-17 14:01:13.910011942 +0000 UTC m=+963.545111375" observedRunningTime="2026-02-17 14:01:19.532786387 +0000 UTC m=+969.167885810" watchObservedRunningTime="2026-02-17 14:01:19.533801236 +0000 UTC m=+969.168900669" Feb 17 14:01:19 crc kubenswrapper[4833]: I0217 14:01:19.563071 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-bffkj" podStartSLOduration=4.949482047 podStartE2EDuration="28.563032146s" podCreationTimestamp="2026-02-17 14:00:51 +0000 UTC" firstStartedPulling="2026-02-17 14:00:54.161186007 +0000 UTC m=+943.796285440" lastFinishedPulling="2026-02-17 14:01:17.774736106 +0000 UTC m=+967.409835539" observedRunningTime="2026-02-17 14:01:19.559247609 +0000 UTC m=+969.194347042" watchObservedRunningTime="2026-02-17 14:01:19.563032146 +0000 UTC m=+969.198131579" Feb 17 14:01:19 crc kubenswrapper[4833]: I0217 14:01:19.579550 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-6sx42" podStartSLOduration=5.07984555 podStartE2EDuration="28.579530525s" podCreationTimestamp="2026-02-17 14:00:51 +0000 UTC" firstStartedPulling="2026-02-17 14:00:54.297270122 +0000 UTC m=+943.932369545" lastFinishedPulling="2026-02-17 14:01:17.796955087 +0000 UTC m=+967.432054520" observedRunningTime="2026-02-17 14:01:19.574848962 +0000 UTC m=+969.209948395" watchObservedRunningTime="2026-02-17 14:01:19.579530525 +0000 UTC m=+969.214629958" Feb 17 14:01:19 crc kubenswrapper[4833]: I0217 14:01:19.597254 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bdv7p" podStartSLOduration=4.035704906 podStartE2EDuration="27.597237928s" podCreationTimestamp="2026-02-17 14:00:52 +0000 UTC" firstStartedPulling="2026-02-17 14:00:54.2859189 +0000 UTC m=+943.921018333" lastFinishedPulling="2026-02-17 14:01:17.847451922 +0000 UTC m=+967.482551355" observedRunningTime="2026-02-17 14:01:19.593958985 +0000 UTC m=+969.229058418" watchObservedRunningTime="2026-02-17 14:01:19.597237928 +0000 UTC m=+969.232337361" Feb 17 14:01:19 crc kubenswrapper[4833]: I0217 14:01:19.613093 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-7pspc" podStartSLOduration=5.019727692 podStartE2EDuration="28.613075278s" podCreationTimestamp="2026-02-17 14:00:51 +0000 UTC" firstStartedPulling="2026-02-17 14:00:54.160457346 +0000 UTC m=+943.795556779" lastFinishedPulling="2026-02-17 14:01:17.753804932 +0000 UTC m=+967.388904365" observedRunningTime="2026-02-17 14:01:19.612276165 +0000 UTC m=+969.247375598" watchObservedRunningTime="2026-02-17 14:01:19.613075278 +0000 UTC m=+969.248174711" Feb 17 14:01:19 crc kubenswrapper[4833]: I0217 14:01:19.647007 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-trcbs" podStartSLOduration=9.182198653 podStartE2EDuration="28.646990891s" podCreationTimestamp="2026-02-17 14:00:51 +0000 UTC" firstStartedPulling="2026-02-17 14:00:53.853830358 +0000 UTC m=+943.488929791" lastFinishedPulling="2026-02-17 14:01:13.318622596 +0000 UTC m=+962.953722029" observedRunningTime="2026-02-17 14:01:19.642071441 +0000 UTC m=+969.277170874" watchObservedRunningTime="2026-02-17 14:01:19.646990891 +0000 UTC m=+969.282090314" Feb 17 14:01:23 crc kubenswrapper[4833]: I0217 14:01:23.164634 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-n4rt9" Feb 17 14:01:23 crc kubenswrapper[4833]: I0217 14:01:23.737229 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert\") pod \"infra-operator-controller-manager-66d6b5f488-87mt8\" (UID: \"ed6e734c-3eb1-47d0-ad82-aa5c934da55a\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8" Feb 17 14:01:23 crc kubenswrapper[4833]: I0217 14:01:23.743967 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed6e734c-3eb1-47d0-ad82-aa5c934da55a-cert\") pod \"infra-operator-controller-manager-66d6b5f488-87mt8\" (UID: \"ed6e734c-3eb1-47d0-ad82-aa5c934da55a\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8" Feb 17 14:01:23 crc kubenswrapper[4833]: I0217 14:01:23.832555 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-p4xtq" Feb 17 14:01:23 crc kubenswrapper[4833]: I0217 14:01:23.841963 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8" Feb 17 14:01:24 crc kubenswrapper[4833]: I0217 14:01:24.142763 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7\" (UID: \"032e7e1b-b6e8-4a69-905a-cd01e516f155\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7" Feb 17 14:01:24 crc kubenswrapper[4833]: I0217 14:01:24.148284 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/032e7e1b-b6e8-4a69-905a-cd01e516f155-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7\" (UID: \"032e7e1b-b6e8-4a69-905a-cd01e516f155\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7" Feb 17 14:01:24 crc kubenswrapper[4833]: I0217 14:01:24.283358 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8"] Feb 17 14:01:24 crc kubenswrapper[4833]: W0217 14:01:24.284567 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded6e734c_3eb1_47d0_ad82_aa5c934da55a.slice/crio-0bfa086788805618716efb1ab2bf6b0d7b814e49505106872c62a69ad3791c08 WatchSource:0}: Error finding container 0bfa086788805618716efb1ab2bf6b0d7b814e49505106872c62a69ad3791c08: Status 404 returned error can't find the container with id 0bfa086788805618716efb1ab2bf6b0d7b814e49505106872c62a69ad3791c08 Feb 17 14:01:24 crc kubenswrapper[4833]: I0217 14:01:24.429846 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-h5bd9" Feb 17 14:01:24 crc kubenswrapper[4833]: I0217 
14:01:24.438194 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7" Feb 17 14:01:24 crc kubenswrapper[4833]: I0217 14:01:24.751784 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" Feb 17 14:01:24 crc kubenswrapper[4833]: I0217 14:01:24.751974 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" Feb 17 14:01:24 crc kubenswrapper[4833]: I0217 14:01:24.759402 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-metrics-certs\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" Feb 17 14:01:24 crc kubenswrapper[4833]: I0217 14:01:24.759517 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eaf6f3b9-35a6-4048-94cb-53563475161a-webhook-certs\") pod \"openstack-operator-controller-manager-5856dc4bfc-jh56r\" (UID: \"eaf6f3b9-35a6-4048-94cb-53563475161a\") " pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" Feb 17 14:01:24 crc kubenswrapper[4833]: I0217 
14:01:24.781376 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-5xrzh" Feb 17 14:01:24 crc kubenswrapper[4833]: I0217 14:01:24.790238 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" Feb 17 14:01:24 crc kubenswrapper[4833]: I0217 14:01:24.874054 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7"] Feb 17 14:01:24 crc kubenswrapper[4833]: I0217 14:01:24.884893 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8" event={"ID":"ed6e734c-3eb1-47d0-ad82-aa5c934da55a","Type":"ContainerStarted","Data":"0bfa086788805618716efb1ab2bf6b0d7b814e49505106872c62a69ad3791c08"} Feb 17 14:01:24 crc kubenswrapper[4833]: W0217 14:01:24.892741 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod032e7e1b_b6e8_4a69_905a_cd01e516f155.slice/crio-9052d063e510416822b62eac80ee2a8ec9eff31700bb4815410fabdb56dd4ec1 WatchSource:0}: Error finding container 9052d063e510416822b62eac80ee2a8ec9eff31700bb4815410fabdb56dd4ec1: Status 404 returned error can't find the container with id 9052d063e510416822b62eac80ee2a8ec9eff31700bb4815410fabdb56dd4ec1 Feb 17 14:01:25 crc kubenswrapper[4833]: I0217 14:01:25.011831 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r"] Feb 17 14:01:25 crc kubenswrapper[4833]: W0217 14:01:25.016650 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaf6f3b9_35a6_4048_94cb_53563475161a.slice/crio-1637ed7a48049d68d027e3a3e53e3dc0dd6507a592c7eb9a2b63378e56119d71 
WatchSource:0}: Error finding container 1637ed7a48049d68d027e3a3e53e3dc0dd6507a592c7eb9a2b63378e56119d71: Status 404 returned error can't find the container with id 1637ed7a48049d68d027e3a3e53e3dc0dd6507a592c7eb9a2b63378e56119d71 Feb 17 14:01:25 crc kubenswrapper[4833]: I0217 14:01:25.897197 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" event={"ID":"eaf6f3b9-35a6-4048-94cb-53563475161a","Type":"ContainerStarted","Data":"2a3b5a99ea0965d0776a31cf9721918332ac5f5e6ff7cfa47134200d0e9983a0"} Feb 17 14:01:25 crc kubenswrapper[4833]: I0217 14:01:25.897555 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" Feb 17 14:01:25 crc kubenswrapper[4833]: I0217 14:01:25.897571 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" event={"ID":"eaf6f3b9-35a6-4048-94cb-53563475161a","Type":"ContainerStarted","Data":"1637ed7a48049d68d027e3a3e53e3dc0dd6507a592c7eb9a2b63378e56119d71"} Feb 17 14:01:25 crc kubenswrapper[4833]: I0217 14:01:25.900449 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7" event={"ID":"032e7e1b-b6e8-4a69-905a-cd01e516f155","Type":"ContainerStarted","Data":"9052d063e510416822b62eac80ee2a8ec9eff31700bb4815410fabdb56dd4ec1"} Feb 17 14:01:25 crc kubenswrapper[4833]: I0217 14:01:25.929425 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" podStartSLOduration=33.929408917 podStartE2EDuration="33.929408917s" podCreationTimestamp="2026-02-17 14:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:01:25.921743218 
+0000 UTC m=+975.556842651" watchObservedRunningTime="2026-02-17 14:01:25.929408917 +0000 UTC m=+975.564508350" Feb 17 14:01:27 crc kubenswrapper[4833]: I0217 14:01:27.918289 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8" event={"ID":"ed6e734c-3eb1-47d0-ad82-aa5c934da55a","Type":"ContainerStarted","Data":"36e3b8cfe9ffbe85abd1d93ec47dee91815c26472ed1a99b45aa3b9fcd2de408"} Feb 17 14:01:27 crc kubenswrapper[4833]: I0217 14:01:27.918594 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8" Feb 17 14:01:27 crc kubenswrapper[4833]: I0217 14:01:27.920165 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7" event={"ID":"032e7e1b-b6e8-4a69-905a-cd01e516f155","Type":"ContainerStarted","Data":"f4cd909857393123c202bc819b1231b24c8cd5077cafca9598bfbc6564f96fe4"} Feb 17 14:01:27 crc kubenswrapper[4833]: I0217 14:01:27.920846 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7" Feb 17 14:01:27 crc kubenswrapper[4833]: I0217 14:01:27.923354 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-kst2n" event={"ID":"22eaba8f-f822-42a0-a01f-4b1bdd5c0728","Type":"ContainerStarted","Data":"42430c9e8fd26e518b76d38e133464be82e91dd1b94aede1f0688b96523b1ae8"} Feb 17 14:01:27 crc kubenswrapper[4833]: I0217 14:01:27.923560 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-kst2n" Feb 17 14:01:27 crc kubenswrapper[4833]: I0217 14:01:27.924915 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-pc29h" event={"ID":"3c850072-fbc2-4f6a-a59c-51498bc5aef2","Type":"ContainerStarted","Data":"adbab635a21b93c746b7e1e07bd42f7f02c7a8256aeb361e5ec05e3255ba38b8"} Feb 17 14:01:27 crc kubenswrapper[4833]: I0217 14:01:27.925066 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-pc29h" Feb 17 14:01:27 crc kubenswrapper[4833]: I0217 14:01:27.926217 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp" event={"ID":"a40de3a4-5ce4-4c9b-a022-acf3aeb9b852","Type":"ContainerStarted","Data":"ee108d9fc1e23a92223b1e9bc6d63b393f924314b621850a30fb94d6f9da2b40"} Feb 17 14:01:27 crc kubenswrapper[4833]: I0217 14:01:27.926803 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp" Feb 17 14:01:27 crc kubenswrapper[4833]: I0217 14:01:27.950144 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8" podStartSLOduration=34.49736558 podStartE2EDuration="36.950122767s" podCreationTimestamp="2026-02-17 14:00:51 +0000 UTC" firstStartedPulling="2026-02-17 14:01:24.286634329 +0000 UTC m=+973.921733762" lastFinishedPulling="2026-02-17 14:01:26.739391516 +0000 UTC m=+976.374490949" observedRunningTime="2026-02-17 14:01:27.939811021 +0000 UTC m=+977.574910474" watchObservedRunningTime="2026-02-17 14:01:27.950122767 +0000 UTC m=+977.585222230" Feb 17 14:01:27 crc kubenswrapper[4833]: I0217 14:01:27.987481 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7" podStartSLOduration=35.138603898 podStartE2EDuration="36.987455536s" podCreationTimestamp="2026-02-17 14:00:51 
+0000 UTC" firstStartedPulling="2026-02-17 14:01:24.895484869 +0000 UTC m=+974.530584292" lastFinishedPulling="2026-02-17 14:01:26.744336497 +0000 UTC m=+976.379435930" observedRunningTime="2026-02-17 14:01:27.980697462 +0000 UTC m=+977.615796915" watchObservedRunningTime="2026-02-17 14:01:27.987455536 +0000 UTC m=+977.622554979" Feb 17 14:01:28 crc kubenswrapper[4833]: I0217 14:01:28.005157 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-kst2n" podStartSLOduration=4.385774854 podStartE2EDuration="37.005132612s" podCreationTimestamp="2026-02-17 14:00:51 +0000 UTC" firstStartedPulling="2026-02-17 14:00:54.122435616 +0000 UTC m=+943.757535049" lastFinishedPulling="2026-02-17 14:01:26.741793374 +0000 UTC m=+976.376892807" observedRunningTime="2026-02-17 14:01:27.998650496 +0000 UTC m=+977.633749929" watchObservedRunningTime="2026-02-17 14:01:28.005132612 +0000 UTC m=+977.640232045" Feb 17 14:01:28 crc kubenswrapper[4833]: I0217 14:01:28.017933 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp" podStartSLOduration=3.561968671 podStartE2EDuration="36.017919658s" podCreationTimestamp="2026-02-17 14:00:52 +0000 UTC" firstStartedPulling="2026-02-17 14:00:54.284504639 +0000 UTC m=+943.919604072" lastFinishedPulling="2026-02-17 14:01:26.740455626 +0000 UTC m=+976.375555059" observedRunningTime="2026-02-17 14:01:28.011651878 +0000 UTC m=+977.646751301" watchObservedRunningTime="2026-02-17 14:01:28.017919658 +0000 UTC m=+977.653019091" Feb 17 14:01:28 crc kubenswrapper[4833]: I0217 14:01:28.033270 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-pc29h" podStartSLOduration=3.380537899 podStartE2EDuration="37.033248136s" podCreationTimestamp="2026-02-17 14:00:51 +0000 UTC" 
firstStartedPulling="2026-02-17 14:00:53.84229285 +0000 UTC m=+943.477392283" lastFinishedPulling="2026-02-17 14:01:27.495003087 +0000 UTC m=+977.130102520" observedRunningTime="2026-02-17 14:01:28.029619233 +0000 UTC m=+977.664718686" watchObservedRunningTime="2026-02-17 14:01:28.033248136 +0000 UTC m=+977.668347569" Feb 17 14:01:31 crc kubenswrapper[4833]: I0217 14:01:31.890299 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-jjchm" Feb 17 14:01:31 crc kubenswrapper[4833]: I0217 14:01:31.897747 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-hdxb7" Feb 17 14:01:31 crc kubenswrapper[4833]: I0217 14:01:31.960035 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zqqxg" Feb 17 14:01:31 crc kubenswrapper[4833]: I0217 14:01:31.969712 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-9595d6797-798bv" Feb 17 14:01:32 crc kubenswrapper[4833]: I0217 14:01:32.058826 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-kkrrz" Feb 17 14:01:32 crc kubenswrapper[4833]: I0217 14:01:32.164898 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-trcbs" Feb 17 14:01:32 crc kubenswrapper[4833]: I0217 14:01:32.209836 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-5lrx9" Feb 17 14:01:32 crc kubenswrapper[4833]: I0217 14:01:32.362922 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-pc29h" Feb 17 14:01:32 crc kubenswrapper[4833]: I0217 14:01:32.628555 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-kst2n" Feb 17 14:01:32 crc kubenswrapper[4833]: I0217 14:01:32.629029 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-brrjj" Feb 17 14:01:32 crc kubenswrapper[4833]: I0217 14:01:32.629502 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-rq2hb" Feb 17 14:01:32 crc kubenswrapper[4833]: I0217 14:01:32.630776 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-bffkj" Feb 17 14:01:32 crc kubenswrapper[4833]: I0217 14:01:32.633583 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-twx52" Feb 17 14:01:32 crc kubenswrapper[4833]: I0217 14:01:32.633994 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-dt8tb" Feb 17 14:01:32 crc kubenswrapper[4833]: I0217 14:01:32.719357 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-6sx42" Feb 17 14:01:32 crc kubenswrapper[4833]: I0217 14:01:32.757119 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-s4gk9" Feb 17 14:01:32 crc kubenswrapper[4833]: I0217 14:01:32.852734 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-7pspc" Feb 17 14:01:33 crc kubenswrapper[4833]: I0217 14:01:33.177478 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp" Feb 17 14:01:33 crc kubenswrapper[4833]: I0217 14:01:33.848862 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-87mt8" Feb 17 14:01:34 crc kubenswrapper[4833]: I0217 14:01:34.447074 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7" Feb 17 14:01:34 crc kubenswrapper[4833]: I0217 14:01:34.799966 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5856dc4bfc-jh56r" Feb 17 14:01:39 crc kubenswrapper[4833]: I0217 14:01:39.093520 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp"] Feb 17 14:01:39 crc kubenswrapper[4833]: I0217 14:01:39.093985 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp" podUID="a40de3a4-5ce4-4c9b-a022-acf3aeb9b852" containerName="manager" containerID="cri-o://ee108d9fc1e23a92223b1e9bc6d63b393f924314b621850a30fb94d6f9da2b40" gracePeriod=10 Feb 17 14:01:39 crc kubenswrapper[4833]: I0217 14:01:39.842910 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht"] Feb 17 14:01:39 crc kubenswrapper[4833]: I0217 14:01:39.843783 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht" Feb 17 14:01:39 crc kubenswrapper[4833]: I0217 14:01:39.864535 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht"] Feb 17 14:01:40 crc kubenswrapper[4833]: I0217 14:01:40.001213 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khj48\" (UniqueName: \"kubernetes.io/projected/04089ef8-719d-419e-b776-cd18af4012b0-kube-api-access-khj48\") pod \"watcher-operator-controller-manager-59669cd6b8-76kht\" (UID: \"04089ef8-719d-419e-b776-cd18af4012b0\") " pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht" Feb 17 14:01:40 crc kubenswrapper[4833]: I0217 14:01:40.034813 4833 generic.go:334] "Generic (PLEG): container finished" podID="a40de3a4-5ce4-4c9b-a022-acf3aeb9b852" containerID="ee108d9fc1e23a92223b1e9bc6d63b393f924314b621850a30fb94d6f9da2b40" exitCode=0 Feb 17 14:01:40 crc kubenswrapper[4833]: I0217 14:01:40.035123 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp" event={"ID":"a40de3a4-5ce4-4c9b-a022-acf3aeb9b852","Type":"ContainerDied","Data":"ee108d9fc1e23a92223b1e9bc6d63b393f924314b621850a30fb94d6f9da2b40"} Feb 17 14:01:40 crc kubenswrapper[4833]: I0217 14:01:40.102775 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khj48\" (UniqueName: \"kubernetes.io/projected/04089ef8-719d-419e-b776-cd18af4012b0-kube-api-access-khj48\") pod \"watcher-operator-controller-manager-59669cd6b8-76kht\" (UID: \"04089ef8-719d-419e-b776-cd18af4012b0\") " pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht" Feb 17 14:01:40 crc kubenswrapper[4833]: I0217 14:01:40.129988 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-khj48\" (UniqueName: \"kubernetes.io/projected/04089ef8-719d-419e-b776-cd18af4012b0-kube-api-access-khj48\") pod \"watcher-operator-controller-manager-59669cd6b8-76kht\" (UID: \"04089ef8-719d-419e-b776-cd18af4012b0\") " pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht" Feb 17 14:01:40 crc kubenswrapper[4833]: I0217 14:01:40.164447 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht" Feb 17 14:01:40 crc kubenswrapper[4833]: I0217 14:01:40.173277 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp" Feb 17 14:01:40 crc kubenswrapper[4833]: I0217 14:01:40.306559 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pggtw\" (UniqueName: \"kubernetes.io/projected/a40de3a4-5ce4-4c9b-a022-acf3aeb9b852-kube-api-access-pggtw\") pod \"a40de3a4-5ce4-4c9b-a022-acf3aeb9b852\" (UID: \"a40de3a4-5ce4-4c9b-a022-acf3aeb9b852\") " Feb 17 14:01:40 crc kubenswrapper[4833]: I0217 14:01:40.312355 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a40de3a4-5ce4-4c9b-a022-acf3aeb9b852-kube-api-access-pggtw" (OuterVolumeSpecName: "kube-api-access-pggtw") pod "a40de3a4-5ce4-4c9b-a022-acf3aeb9b852" (UID: "a40de3a4-5ce4-4c9b-a022-acf3aeb9b852"). InnerVolumeSpecName "kube-api-access-pggtw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:01:40 crc kubenswrapper[4833]: I0217 14:01:40.408091 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pggtw\" (UniqueName: \"kubernetes.io/projected/a40de3a4-5ce4-4c9b-a022-acf3aeb9b852-kube-api-access-pggtw\") on node \"crc\" DevicePath \"\"" Feb 17 14:01:40 crc kubenswrapper[4833]: I0217 14:01:40.602897 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht"] Feb 17 14:01:40 crc kubenswrapper[4833]: W0217 14:01:40.605169 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04089ef8_719d_419e_b776_cd18af4012b0.slice/crio-4d5ac2db0798ffe90ac5b5e06fdf093736b9fa7b7a64cbda2c5e117d20250ea5 WatchSource:0}: Error finding container 4d5ac2db0798ffe90ac5b5e06fdf093736b9fa7b7a64cbda2c5e117d20250ea5: Status 404 returned error can't find the container with id 4d5ac2db0798ffe90ac5b5e06fdf093736b9fa7b7a64cbda2c5e117d20250ea5 Feb 17 14:01:41 crc kubenswrapper[4833]: I0217 14:01:41.044855 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp" Feb 17 14:01:41 crc kubenswrapper[4833]: I0217 14:01:41.055375 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht" Feb 17 14:01:41 crc kubenswrapper[4833]: I0217 14:01:41.055413 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp" event={"ID":"a40de3a4-5ce4-4c9b-a022-acf3aeb9b852","Type":"ContainerDied","Data":"846c34a84d3b1bc98c190d0654914eef2dbe22d0acca4caca2ca767b1ca96d41"} Feb 17 14:01:41 crc kubenswrapper[4833]: I0217 14:01:41.055439 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht" event={"ID":"04089ef8-719d-419e-b776-cd18af4012b0","Type":"ContainerStarted","Data":"619cdab692c8f4d8ab18df5f3dbd1c4ca4aa6b347fd6cca5e177c09892df2998"} Feb 17 14:01:41 crc kubenswrapper[4833]: I0217 14:01:41.055451 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht" event={"ID":"04089ef8-719d-419e-b776-cd18af4012b0","Type":"ContainerStarted","Data":"4d5ac2db0798ffe90ac5b5e06fdf093736b9fa7b7a64cbda2c5e117d20250ea5"} Feb 17 14:01:41 crc kubenswrapper[4833]: I0217 14:01:41.055470 4833 scope.go:117] "RemoveContainer" containerID="ee108d9fc1e23a92223b1e9bc6d63b393f924314b621850a30fb94d6f9da2b40" Feb 17 14:01:41 crc kubenswrapper[4833]: I0217 14:01:41.089444 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht" podStartSLOduration=2.089424597 podStartE2EDuration="2.089424597s" podCreationTimestamp="2026-02-17 14:01:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-17 14:01:41.082764076 +0000 UTC m=+990.717863519" watchObservedRunningTime="2026-02-17 14:01:41.089424597 +0000 UTC m=+990.724524030" Feb 17 14:01:41 crc kubenswrapper[4833]: I0217 14:01:41.108194 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp"] Feb 17 14:01:41 crc kubenswrapper[4833]: I0217 14:01:41.113016 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59669cd6b8-hrbgp"] Feb 17 14:01:41 crc kubenswrapper[4833]: I0217 14:01:41.525526 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht"] Feb 17 14:01:42 crc kubenswrapper[4833]: I0217 14:01:42.259984 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59669cd6b8-vzhgx"] Feb 17 14:01:42 crc kubenswrapper[4833]: E0217 14:01:42.260347 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a40de3a4-5ce4-4c9b-a022-acf3aeb9b852" containerName="manager" Feb 17 14:01:42 crc kubenswrapper[4833]: I0217 14:01:42.260363 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a40de3a4-5ce4-4c9b-a022-acf3aeb9b852" containerName="manager" Feb 17 14:01:42 crc kubenswrapper[4833]: I0217 14:01:42.260542 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a40de3a4-5ce4-4c9b-a022-acf3aeb9b852" containerName="manager" Feb 17 14:01:42 crc kubenswrapper[4833]: I0217 14:01:42.261190 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-vzhgx" Feb 17 14:01:42 crc kubenswrapper[4833]: I0217 14:01:42.272508 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59669cd6b8-vzhgx"] Feb 17 14:01:42 crc kubenswrapper[4833]: I0217 14:01:42.335514 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndssh\" (UniqueName: \"kubernetes.io/projected/c059d8ff-7282-40c8-8a7e-542d11569869-kube-api-access-ndssh\") pod \"watcher-operator-controller-manager-59669cd6b8-vzhgx\" (UID: \"c059d8ff-7282-40c8-8a7e-542d11569869\") " pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-vzhgx" Feb 17 14:01:42 crc kubenswrapper[4833]: I0217 14:01:42.438950 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndssh\" (UniqueName: \"kubernetes.io/projected/c059d8ff-7282-40c8-8a7e-542d11569869-kube-api-access-ndssh\") pod \"watcher-operator-controller-manager-59669cd6b8-vzhgx\" (UID: \"c059d8ff-7282-40c8-8a7e-542d11569869\") " pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-vzhgx" Feb 17 14:01:42 crc kubenswrapper[4833]: I0217 14:01:42.456883 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndssh\" (UniqueName: \"kubernetes.io/projected/c059d8ff-7282-40c8-8a7e-542d11569869-kube-api-access-ndssh\") pod \"watcher-operator-controller-manager-59669cd6b8-vzhgx\" (UID: \"c059d8ff-7282-40c8-8a7e-542d11569869\") " pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-vzhgx" Feb 17 14:01:42 crc kubenswrapper[4833]: I0217 14:01:42.578948 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-vzhgx" Feb 17 14:01:43 crc kubenswrapper[4833]: I0217 14:01:43.050834 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a40de3a4-5ce4-4c9b-a022-acf3aeb9b852" path="/var/lib/kubelet/pods/a40de3a4-5ce4-4c9b-a022-acf3aeb9b852/volumes" Feb 17 14:01:43 crc kubenswrapper[4833]: I0217 14:01:43.061370 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht" podUID="04089ef8-719d-419e-b776-cd18af4012b0" containerName="manager" containerID="cri-o://619cdab692c8f4d8ab18df5f3dbd1c4ca4aa6b347fd6cca5e177c09892df2998" gracePeriod=10 Feb 17 14:01:43 crc kubenswrapper[4833]: I0217 14:01:43.151908 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59669cd6b8-vzhgx"] Feb 17 14:01:43 crc kubenswrapper[4833]: W0217 14:01:43.160549 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc059d8ff_7282_40c8_8a7e_542d11569869.slice/crio-2bd2450657c4d47a8c7939d38e1111ab495b73e7938b9805a120328de82e13d4 WatchSource:0}: Error finding container 2bd2450657c4d47a8c7939d38e1111ab495b73e7938b9805a120328de82e13d4: Status 404 returned error can't find the container with id 2bd2450657c4d47a8c7939d38e1111ab495b73e7938b9805a120328de82e13d4 Feb 17 14:01:43 crc kubenswrapper[4833]: I0217 14:01:43.399865 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht" Feb 17 14:01:43 crc kubenswrapper[4833]: I0217 14:01:43.489940 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt"] Feb 17 14:01:43 crc kubenswrapper[4833]: I0217 14:01:43.490283 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt" podUID="a63e4e29-66ee-444a-9c42-f91149922c13" containerName="operator" containerID="cri-o://85b15e27b00ac0c50a7adcc677e346be4e2c45c57e625525f00c438013c8a94a" gracePeriod=10 Feb 17 14:01:43 crc kubenswrapper[4833]: I0217 14:01:43.553534 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khj48\" (UniqueName: \"kubernetes.io/projected/04089ef8-719d-419e-b776-cd18af4012b0-kube-api-access-khj48\") pod \"04089ef8-719d-419e-b776-cd18af4012b0\" (UID: \"04089ef8-719d-419e-b776-cd18af4012b0\") " Feb 17 14:01:43 crc kubenswrapper[4833]: I0217 14:01:43.559647 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04089ef8-719d-419e-b776-cd18af4012b0-kube-api-access-khj48" (OuterVolumeSpecName: "kube-api-access-khj48") pod "04089ef8-719d-419e-b776-cd18af4012b0" (UID: "04089ef8-719d-419e-b776-cd18af4012b0"). InnerVolumeSpecName "kube-api-access-khj48". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:01:43 crc kubenswrapper[4833]: I0217 14:01:43.655602 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khj48\" (UniqueName: \"kubernetes.io/projected/04089ef8-719d-419e-b776-cd18af4012b0-kube-api-access-khj48\") on node \"crc\" DevicePath \"\"" Feb 17 14:01:43 crc kubenswrapper[4833]: I0217 14:01:43.884566 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.059514 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxwdp\" (UniqueName: \"kubernetes.io/projected/a63e4e29-66ee-444a-9c42-f91149922c13-kube-api-access-lxwdp\") pod \"a63e4e29-66ee-444a-9c42-f91149922c13\" (UID: \"a63e4e29-66ee-444a-9c42-f91149922c13\") " Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.066724 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a63e4e29-66ee-444a-9c42-f91149922c13-kube-api-access-lxwdp" (OuterVolumeSpecName: "kube-api-access-lxwdp") pod "a63e4e29-66ee-444a-9c42-f91149922c13" (UID: "a63e4e29-66ee-444a-9c42-f91149922c13"). InnerVolumeSpecName "kube-api-access-lxwdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.070552 4833 generic.go:334] "Generic (PLEG): container finished" podID="04089ef8-719d-419e-b776-cd18af4012b0" containerID="619cdab692c8f4d8ab18df5f3dbd1c4ca4aa6b347fd6cca5e177c09892df2998" exitCode=0 Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.070614 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht" event={"ID":"04089ef8-719d-419e-b776-cd18af4012b0","Type":"ContainerDied","Data":"619cdab692c8f4d8ab18df5f3dbd1c4ca4aa6b347fd6cca5e177c09892df2998"} Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.070640 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht" event={"ID":"04089ef8-719d-419e-b776-cd18af4012b0","Type":"ContainerDied","Data":"4d5ac2db0798ffe90ac5b5e06fdf093736b9fa7b7a64cbda2c5e117d20250ea5"} Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.070657 4833 scope.go:117] "RemoveContainer" 
containerID="619cdab692c8f4d8ab18df5f3dbd1c4ca4aa6b347fd6cca5e177c09892df2998" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.071097 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.072177 4833 generic.go:334] "Generic (PLEG): container finished" podID="a63e4e29-66ee-444a-9c42-f91149922c13" containerID="85b15e27b00ac0c50a7adcc677e346be4e2c45c57e625525f00c438013c8a94a" exitCode=0 Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.072203 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt" event={"ID":"a63e4e29-66ee-444a-9c42-f91149922c13","Type":"ContainerDied","Data":"85b15e27b00ac0c50a7adcc677e346be4e2c45c57e625525f00c438013c8a94a"} Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.072229 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.072234 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt" event={"ID":"a63e4e29-66ee-444a-9c42-f91149922c13","Type":"ContainerDied","Data":"f172822b58b77a02a09f60698f199837113d698da2483ba08a4aea2f8ecdce2b"} Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.073457 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-vzhgx" event={"ID":"c059d8ff-7282-40c8-8a7e-542d11569869","Type":"ContainerStarted","Data":"1b74d0dbd7e341002ba49e61b699657ae8e97b3d6f108798e118f49623a9dd78"} Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.073475 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-vzhgx" event={"ID":"c059d8ff-7282-40c8-8a7e-542d11569869","Type":"ContainerStarted","Data":"2bd2450657c4d47a8c7939d38e1111ab495b73e7938b9805a120328de82e13d4"} Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.073632 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-vzhgx" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.102603 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-vzhgx" podStartSLOduration=2.102586627 podStartE2EDuration="2.102586627s" podCreationTimestamp="2026-02-17 14:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:01:44.098519611 +0000 UTC m=+993.733619044" watchObservedRunningTime="2026-02-17 14:01:44.102586627 +0000 UTC m=+993.737686060" Feb 17 14:01:44 crc kubenswrapper[4833]: 
I0217 14:01:44.119719 4833 scope.go:117] "RemoveContainer" containerID="619cdab692c8f4d8ab18df5f3dbd1c4ca4aa6b347fd6cca5e177c09892df2998" Feb 17 14:01:44 crc kubenswrapper[4833]: E0217 14:01:44.120379 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"619cdab692c8f4d8ab18df5f3dbd1c4ca4aa6b347fd6cca5e177c09892df2998\": container with ID starting with 619cdab692c8f4d8ab18df5f3dbd1c4ca4aa6b347fd6cca5e177c09892df2998 not found: ID does not exist" containerID="619cdab692c8f4d8ab18df5f3dbd1c4ca4aa6b347fd6cca5e177c09892df2998" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.120426 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"619cdab692c8f4d8ab18df5f3dbd1c4ca4aa6b347fd6cca5e177c09892df2998"} err="failed to get container status \"619cdab692c8f4d8ab18df5f3dbd1c4ca4aa6b347fd6cca5e177c09892df2998\": rpc error: code = NotFound desc = could not find container \"619cdab692c8f4d8ab18df5f3dbd1c4ca4aa6b347fd6cca5e177c09892df2998\": container with ID starting with 619cdab692c8f4d8ab18df5f3dbd1c4ca4aa6b347fd6cca5e177c09892df2998 not found: ID does not exist" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.120474 4833 scope.go:117] "RemoveContainer" containerID="85b15e27b00ac0c50a7adcc677e346be4e2c45c57e625525f00c438013c8a94a" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.128116 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht"] Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.140304 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59669cd6b8-76kht"] Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.147473 4833 scope.go:117] "RemoveContainer" containerID="85b15e27b00ac0c50a7adcc677e346be4e2c45c57e625525f00c438013c8a94a" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 
14:01:44.147635 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt"] Feb 17 14:01:44 crc kubenswrapper[4833]: E0217 14:01:44.147772 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85b15e27b00ac0c50a7adcc677e346be4e2c45c57e625525f00c438013c8a94a\": container with ID starting with 85b15e27b00ac0c50a7adcc677e346be4e2c45c57e625525f00c438013c8a94a not found: ID does not exist" containerID="85b15e27b00ac0c50a7adcc677e346be4e2c45c57e625525f00c438013c8a94a" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.147795 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85b15e27b00ac0c50a7adcc677e346be4e2c45c57e625525f00c438013c8a94a"} err="failed to get container status \"85b15e27b00ac0c50a7adcc677e346be4e2c45c57e625525f00c438013c8a94a\": rpc error: code = NotFound desc = could not find container \"85b15e27b00ac0c50a7adcc677e346be4e2c45c57e625525f00c438013c8a94a\": container with ID starting with 85b15e27b00ac0c50a7adcc677e346be4e2c45c57e625525f00c438013c8a94a not found: ID does not exist" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.153228 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-init-d585dc784-s6xrt"] Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.161608 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxwdp\" (UniqueName: \"kubernetes.io/projected/a63e4e29-66ee-444a-9c42-f91149922c13-kube-api-access-lxwdp\") on node \"crc\" DevicePath \"\"" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.634086 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-vnj9m"] Feb 17 14:01:44 crc kubenswrapper[4833]: E0217 14:01:44.637663 4833 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a63e4e29-66ee-444a-9c42-f91149922c13" containerName="operator" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.637713 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a63e4e29-66ee-444a-9c42-f91149922c13" containerName="operator" Feb 17 14:01:44 crc kubenswrapper[4833]: E0217 14:01:44.637740 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04089ef8-719d-419e-b776-cd18af4012b0" containerName="manager" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.637748 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="04089ef8-719d-419e-b776-cd18af4012b0" containerName="manager" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.637983 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a63e4e29-66ee-444a-9c42-f91149922c13" containerName="operator" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.638018 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="04089ef8-719d-419e-b776-cd18af4012b0" containerName="manager" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.638857 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-vnj9m" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.641595 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-index-dockercfg-czkg5" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.643464 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-vnj9m"] Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.671352 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qcv7\" (UniqueName: \"kubernetes.io/projected/3d35c79f-8d8d-439a-9daf-a7bc5364192b-kube-api-access-9qcv7\") pod \"watcher-operator-index-vnj9m\" (UID: \"3d35c79f-8d8d-439a-9daf-a7bc5364192b\") " pod="openstack-operators/watcher-operator-index-vnj9m" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.772408 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qcv7\" (UniqueName: \"kubernetes.io/projected/3d35c79f-8d8d-439a-9daf-a7bc5364192b-kube-api-access-9qcv7\") pod \"watcher-operator-index-vnj9m\" (UID: \"3d35c79f-8d8d-439a-9daf-a7bc5364192b\") " pod="openstack-operators/watcher-operator-index-vnj9m" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.790564 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qcv7\" (UniqueName: \"kubernetes.io/projected/3d35c79f-8d8d-439a-9daf-a7bc5364192b-kube-api-access-9qcv7\") pod \"watcher-operator-index-vnj9m\" (UID: \"3d35c79f-8d8d-439a-9daf-a7bc5364192b\") " pod="openstack-operators/watcher-operator-index-vnj9m" Feb 17 14:01:44 crc kubenswrapper[4833]: I0217 14:01:44.964449 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-vnj9m" Feb 17 14:01:45 crc kubenswrapper[4833]: I0217 14:01:45.052620 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04089ef8-719d-419e-b776-cd18af4012b0" path="/var/lib/kubelet/pods/04089ef8-719d-419e-b776-cd18af4012b0/volumes" Feb 17 14:01:45 crc kubenswrapper[4833]: I0217 14:01:45.053330 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a63e4e29-66ee-444a-9c42-f91149922c13" path="/var/lib/kubelet/pods/a63e4e29-66ee-444a-9c42-f91149922c13/volumes" Feb 17 14:01:45 crc kubenswrapper[4833]: I0217 14:01:45.466066 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-vnj9m"] Feb 17 14:01:45 crc kubenswrapper[4833]: W0217 14:01:45.468243 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d35c79f_8d8d_439a_9daf_a7bc5364192b.slice/crio-6f9e082b408bf4c17b1408f804df08ae6cf77d1a4878cad539891d77a316d411 WatchSource:0}: Error finding container 6f9e082b408bf4c17b1408f804df08ae6cf77d1a4878cad539891d77a316d411: Status 404 returned error can't find the container with id 6f9e082b408bf4c17b1408f804df08ae6cf77d1a4878cad539891d77a316d411 Feb 17 14:01:46 crc kubenswrapper[4833]: I0217 14:01:46.146242 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-vnj9m" event={"ID":"3d35c79f-8d8d-439a-9daf-a7bc5364192b","Type":"ContainerStarted","Data":"272c20c22e3875aaccacba898130881a34d9b75b994392c53dbed17740767ba7"} Feb 17 14:01:46 crc kubenswrapper[4833]: I0217 14:01:46.147318 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-vnj9m" event={"ID":"3d35c79f-8d8d-439a-9daf-a7bc5364192b","Type":"ContainerStarted","Data":"6f9e082b408bf4c17b1408f804df08ae6cf77d1a4878cad539891d77a316d411"} Feb 17 14:01:46 crc kubenswrapper[4833]: I0217 
14:01:46.164221 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-vnj9m" podStartSLOduration=1.994270353 podStartE2EDuration="2.164204768s" podCreationTimestamp="2026-02-17 14:01:44 +0000 UTC" firstStartedPulling="2026-02-17 14:01:45.470663213 +0000 UTC m=+995.105762646" lastFinishedPulling="2026-02-17 14:01:45.640597618 +0000 UTC m=+995.275697061" observedRunningTime="2026-02-17 14:01:46.162835829 +0000 UTC m=+995.797935262" watchObservedRunningTime="2026-02-17 14:01:46.164204768 +0000 UTC m=+995.799304201" Feb 17 14:01:48 crc kubenswrapper[4833]: I0217 14:01:48.437439 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-vnj9m"] Feb 17 14:01:48 crc kubenswrapper[4833]: I0217 14:01:48.438107 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-index-vnj9m" podUID="3d35c79f-8d8d-439a-9daf-a7bc5364192b" containerName="registry-server" containerID="cri-o://272c20c22e3875aaccacba898130881a34d9b75b994392c53dbed17740767ba7" gracePeriod=2 Feb 17 14:01:49 crc kubenswrapper[4833]: I0217 14:01:49.056079 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-nrjjw"] Feb 17 14:01:49 crc kubenswrapper[4833]: I0217 14:01:49.057166 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-nrjjw" Feb 17 14:01:49 crc kubenswrapper[4833]: I0217 14:01:49.060435 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-nrjjw"] Feb 17 14:01:49 crc kubenswrapper[4833]: I0217 14:01:49.183304 4833 generic.go:334] "Generic (PLEG): container finished" podID="3d35c79f-8d8d-439a-9daf-a7bc5364192b" containerID="272c20c22e3875aaccacba898130881a34d9b75b994392c53dbed17740767ba7" exitCode=0 Feb 17 14:01:49 crc kubenswrapper[4833]: I0217 14:01:49.183345 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-vnj9m" event={"ID":"3d35c79f-8d8d-439a-9daf-a7bc5364192b","Type":"ContainerDied","Data":"272c20c22e3875aaccacba898130881a34d9b75b994392c53dbed17740767ba7"} Feb 17 14:01:49 crc kubenswrapper[4833]: I0217 14:01:49.243197 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d9s9\" (UniqueName: \"kubernetes.io/projected/573bc2ac-05e9-4a4e-89dd-497972463c4b-kube-api-access-2d9s9\") pod \"watcher-operator-index-nrjjw\" (UID: \"573bc2ac-05e9-4a4e-89dd-497972463c4b\") " pod="openstack-operators/watcher-operator-index-nrjjw" Feb 17 14:01:49 crc kubenswrapper[4833]: I0217 14:01:49.345260 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d9s9\" (UniqueName: \"kubernetes.io/projected/573bc2ac-05e9-4a4e-89dd-497972463c4b-kube-api-access-2d9s9\") pod \"watcher-operator-index-nrjjw\" (UID: \"573bc2ac-05e9-4a4e-89dd-497972463c4b\") " pod="openstack-operators/watcher-operator-index-nrjjw" Feb 17 14:01:49 crc kubenswrapper[4833]: I0217 14:01:49.373846 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d9s9\" (UniqueName: \"kubernetes.io/projected/573bc2ac-05e9-4a4e-89dd-497972463c4b-kube-api-access-2d9s9\") pod \"watcher-operator-index-nrjjw\" (UID: 
\"573bc2ac-05e9-4a4e-89dd-497972463c4b\") " pod="openstack-operators/watcher-operator-index-nrjjw"
Feb 17 14:01:49 crc kubenswrapper[4833]: I0217 14:01:49.376517 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-nrjjw"
Feb 17 14:01:49 crc kubenswrapper[4833]: I0217 14:01:49.629895 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-nrjjw"]
Feb 17 14:01:50 crc kubenswrapper[4833]: I0217 14:01:50.192069 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-nrjjw" event={"ID":"573bc2ac-05e9-4a4e-89dd-497972463c4b","Type":"ContainerStarted","Data":"88396b4869ea730d3e75050ac4a8d3c6e412bd025204a21e961cc58028479a38"}
Feb 17 14:01:50 crc kubenswrapper[4833]: I0217 14:01:50.636543 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-vnj9m"
Feb 17 14:01:50 crc kubenswrapper[4833]: I0217 14:01:50.670234 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qcv7\" (UniqueName: \"kubernetes.io/projected/3d35c79f-8d8d-439a-9daf-a7bc5364192b-kube-api-access-9qcv7\") pod \"3d35c79f-8d8d-439a-9daf-a7bc5364192b\" (UID: \"3d35c79f-8d8d-439a-9daf-a7bc5364192b\") "
Feb 17 14:01:50 crc kubenswrapper[4833]: I0217 14:01:50.673529 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d35c79f-8d8d-439a-9daf-a7bc5364192b-kube-api-access-9qcv7" (OuterVolumeSpecName: "kube-api-access-9qcv7") pod "3d35c79f-8d8d-439a-9daf-a7bc5364192b" (UID: "3d35c79f-8d8d-439a-9daf-a7bc5364192b"). InnerVolumeSpecName "kube-api-access-9qcv7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:01:50 crc kubenswrapper[4833]: I0217 14:01:50.772004 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qcv7\" (UniqueName: \"kubernetes.io/projected/3d35c79f-8d8d-439a-9daf-a7bc5364192b-kube-api-access-9qcv7\") on node \"crc\" DevicePath \"\""
Feb 17 14:01:51 crc kubenswrapper[4833]: I0217 14:01:51.201421 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-vnj9m" event={"ID":"3d35c79f-8d8d-439a-9daf-a7bc5364192b","Type":"ContainerDied","Data":"6f9e082b408bf4c17b1408f804df08ae6cf77d1a4878cad539891d77a316d411"}
Feb 17 14:01:51 crc kubenswrapper[4833]: I0217 14:01:51.201475 4833 scope.go:117] "RemoveContainer" containerID="272c20c22e3875aaccacba898130881a34d9b75b994392c53dbed17740767ba7"
Feb 17 14:01:51 crc kubenswrapper[4833]: I0217 14:01:51.201589 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-vnj9m"
Feb 17 14:01:51 crc kubenswrapper[4833]: I0217 14:01:51.234101 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-vnj9m"]
Feb 17 14:01:51 crc kubenswrapper[4833]: I0217 14:01:51.243304 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-index-vnj9m"]
Feb 17 14:01:52 crc kubenswrapper[4833]: I0217 14:01:52.211187 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-nrjjw" event={"ID":"573bc2ac-05e9-4a4e-89dd-497972463c4b","Type":"ContainerStarted","Data":"7678123d2f1da2605f5ff4e49b59f756750fec739171f0df2b4dbd992d0d5f63"}
Feb 17 14:01:52 crc kubenswrapper[4833]: I0217 14:01:52.234650 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-nrjjw" podStartSLOduration=1.6333139399999999 podStartE2EDuration="3.234627682s" podCreationTimestamp="2026-02-17 14:01:49 +0000 UTC" firstStartedPulling="2026-02-17 14:01:49.636291136 +0000 UTC m=+999.271390569" lastFinishedPulling="2026-02-17 14:01:51.237604868 +0000 UTC m=+1000.872704311" observedRunningTime="2026-02-17 14:01:52.228093795 +0000 UTC m=+1001.863193278" watchObservedRunningTime="2026-02-17 14:01:52.234627682 +0000 UTC m=+1001.869727135"
Feb 17 14:01:52 crc kubenswrapper[4833]: I0217 14:01:52.582658 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-59669cd6b8-vzhgx"
Feb 17 14:01:53 crc kubenswrapper[4833]: I0217 14:01:53.050166 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d35c79f-8d8d-439a-9daf-a7bc5364192b" path="/var/lib/kubelet/pods/3d35c79f-8d8d-439a-9daf-a7bc5364192b/volumes"
Feb 17 14:01:59 crc kubenswrapper[4833]: I0217 14:01:59.377228 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/watcher-operator-index-nrjjw"
Feb 17 14:01:59 crc kubenswrapper[4833]: I0217 14:01:59.377939 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-index-nrjjw"
Feb 17 14:01:59 crc kubenswrapper[4833]: I0217 14:01:59.409023 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/watcher-operator-index-nrjjw"
Feb 17 14:02:00 crc kubenswrapper[4833]: I0217 14:02:00.294101 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-index-nrjjw"
Feb 17 14:02:02 crc kubenswrapper[4833]: I0217 14:02:02.275818 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88"]
Feb 17 14:02:02 crc kubenswrapper[4833]: E0217 14:02:02.276389 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d35c79f-8d8d-439a-9daf-a7bc5364192b" containerName="registry-server"
Feb 17 14:02:02 crc kubenswrapper[4833]: I0217 14:02:02.276401 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d35c79f-8d8d-439a-9daf-a7bc5364192b" containerName="registry-server"
Feb 17 14:02:02 crc kubenswrapper[4833]: I0217 14:02:02.276537 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d35c79f-8d8d-439a-9daf-a7bc5364192b" containerName="registry-server"
Feb 17 14:02:02 crc kubenswrapper[4833]: I0217 14:02:02.277513 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88"
Feb 17 14:02:02 crc kubenswrapper[4833]: I0217 14:02:02.297239 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-zhldv"
Feb 17 14:02:02 crc kubenswrapper[4833]: I0217 14:02:02.307050 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88"]
Feb 17 14:02:02 crc kubenswrapper[4833]: I0217 14:02:02.345542 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltfgb\" (UniqueName: \"kubernetes.io/projected/38b744d8-d892-4318-bb73-8a8c3e9f22d4-kube-api-access-ltfgb\") pod \"4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88\" (UID: \"38b744d8-d892-4318-bb73-8a8c3e9f22d4\") " pod="openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88"
Feb 17 14:02:02 crc kubenswrapper[4833]: I0217 14:02:02.345647 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38b744d8-d892-4318-bb73-8a8c3e9f22d4-bundle\") pod \"4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88\" (UID: \"38b744d8-d892-4318-bb73-8a8c3e9f22d4\") " pod="openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88"
Feb 17 14:02:02 crc kubenswrapper[4833]: I0217 14:02:02.345702 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38b744d8-d892-4318-bb73-8a8c3e9f22d4-util\") pod \"4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88\" (UID: \"38b744d8-d892-4318-bb73-8a8c3e9f22d4\") " pod="openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88"
Feb 17 14:02:02 crc kubenswrapper[4833]: I0217 14:02:02.446960 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38b744d8-d892-4318-bb73-8a8c3e9f22d4-util\") pod \"4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88\" (UID: \"38b744d8-d892-4318-bb73-8a8c3e9f22d4\") " pod="openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88"
Feb 17 14:02:02 crc kubenswrapper[4833]: I0217 14:02:02.447189 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltfgb\" (UniqueName: \"kubernetes.io/projected/38b744d8-d892-4318-bb73-8a8c3e9f22d4-kube-api-access-ltfgb\") pod \"4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88\" (UID: \"38b744d8-d892-4318-bb73-8a8c3e9f22d4\") " pod="openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88"
Feb 17 14:02:02 crc kubenswrapper[4833]: I0217 14:02:02.447249 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38b744d8-d892-4318-bb73-8a8c3e9f22d4-bundle\") pod \"4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88\" (UID: \"38b744d8-d892-4318-bb73-8a8c3e9f22d4\") " pod="openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88"
Feb 17 14:02:02 crc kubenswrapper[4833]: I0217 14:02:02.447674 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38b744d8-d892-4318-bb73-8a8c3e9f22d4-util\") pod \"4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88\" (UID: \"38b744d8-d892-4318-bb73-8a8c3e9f22d4\") " pod="openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88"
Feb 17 14:02:02 crc kubenswrapper[4833]: I0217 14:02:02.447733 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38b744d8-d892-4318-bb73-8a8c3e9f22d4-bundle\") pod \"4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88\" (UID: \"38b744d8-d892-4318-bb73-8a8c3e9f22d4\") " pod="openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88"
Feb 17 14:02:02 crc kubenswrapper[4833]: I0217 14:02:02.468976 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltfgb\" (UniqueName: \"kubernetes.io/projected/38b744d8-d892-4318-bb73-8a8c3e9f22d4-kube-api-access-ltfgb\") pod \"4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88\" (UID: \"38b744d8-d892-4318-bb73-8a8c3e9f22d4\") " pod="openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88"
Feb 17 14:02:02 crc kubenswrapper[4833]: I0217 14:02:02.614913 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88"
Feb 17 14:02:03 crc kubenswrapper[4833]: I0217 14:02:03.067801 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88"]
Feb 17 14:02:03 crc kubenswrapper[4833]: I0217 14:02:03.311951 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88" event={"ID":"38b744d8-d892-4318-bb73-8a8c3e9f22d4","Type":"ContainerStarted","Data":"865bb46378e5c5a641c22fec9c7b974f9d59213de9cd7ded0105338e1395a59d"}
Feb 17 14:02:04 crc kubenswrapper[4833]: I0217 14:02:04.320316 4833 generic.go:334] "Generic (PLEG): container finished" podID="38b744d8-d892-4318-bb73-8a8c3e9f22d4" containerID="09a75c0924b1334b5875dc03138f205c5824851a35318459ad8f222bae02d123" exitCode=0
Feb 17 14:02:04 crc kubenswrapper[4833]: I0217 14:02:04.320373 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88" event={"ID":"38b744d8-d892-4318-bb73-8a8c3e9f22d4","Type":"ContainerDied","Data":"09a75c0924b1334b5875dc03138f205c5824851a35318459ad8f222bae02d123"}
Feb 17 14:02:05 crc kubenswrapper[4833]: I0217 14:02:05.328668 4833 generic.go:334] "Generic (PLEG): container finished" podID="38b744d8-d892-4318-bb73-8a8c3e9f22d4" containerID="ce17f957f84f062287d40548ffbb318cfe01a7da43166207147b58a512e46aeb" exitCode=0
Feb 17 14:02:05 crc kubenswrapper[4833]: I0217 14:02:05.328995 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88" event={"ID":"38b744d8-d892-4318-bb73-8a8c3e9f22d4","Type":"ContainerDied","Data":"ce17f957f84f062287d40548ffbb318cfe01a7da43166207147b58a512e46aeb"}
Feb 17 14:02:06 crc kubenswrapper[4833]: I0217 14:02:06.338879 4833 generic.go:334] "Generic (PLEG): container finished" podID="38b744d8-d892-4318-bb73-8a8c3e9f22d4" containerID="25e2c3dc7d061c7b26a5c6f8954115c2bc4b51bebfa61ea8abf2455aa2136316" exitCode=0
Feb 17 14:02:06 crc kubenswrapper[4833]: I0217 14:02:06.338927 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88" event={"ID":"38b744d8-d892-4318-bb73-8a8c3e9f22d4","Type":"ContainerDied","Data":"25e2c3dc7d061c7b26a5c6f8954115c2bc4b51bebfa61ea8abf2455aa2136316"}
Feb 17 14:02:07 crc kubenswrapper[4833]: I0217 14:02:07.602786 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88"
Feb 17 14:02:07 crc kubenswrapper[4833]: I0217 14:02:07.717628 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38b744d8-d892-4318-bb73-8a8c3e9f22d4-bundle\") pod \"38b744d8-d892-4318-bb73-8a8c3e9f22d4\" (UID: \"38b744d8-d892-4318-bb73-8a8c3e9f22d4\") "
Feb 17 14:02:07 crc kubenswrapper[4833]: I0217 14:02:07.717733 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38b744d8-d892-4318-bb73-8a8c3e9f22d4-util\") pod \"38b744d8-d892-4318-bb73-8a8c3e9f22d4\" (UID: \"38b744d8-d892-4318-bb73-8a8c3e9f22d4\") "
Feb 17 14:02:07 crc kubenswrapper[4833]: I0217 14:02:07.717772 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltfgb\" (UniqueName: \"kubernetes.io/projected/38b744d8-d892-4318-bb73-8a8c3e9f22d4-kube-api-access-ltfgb\") pod \"38b744d8-d892-4318-bb73-8a8c3e9f22d4\" (UID: \"38b744d8-d892-4318-bb73-8a8c3e9f22d4\") "
Feb 17 14:02:07 crc kubenswrapper[4833]: I0217 14:02:07.719588 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38b744d8-d892-4318-bb73-8a8c3e9f22d4-bundle" (OuterVolumeSpecName: "bundle") pod "38b744d8-d892-4318-bb73-8a8c3e9f22d4" (UID: "38b744d8-d892-4318-bb73-8a8c3e9f22d4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:02:07 crc kubenswrapper[4833]: I0217 14:02:07.723338 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b744d8-d892-4318-bb73-8a8c3e9f22d4-kube-api-access-ltfgb" (OuterVolumeSpecName: "kube-api-access-ltfgb") pod "38b744d8-d892-4318-bb73-8a8c3e9f22d4" (UID: "38b744d8-d892-4318-bb73-8a8c3e9f22d4"). InnerVolumeSpecName "kube-api-access-ltfgb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:02:07 crc kubenswrapper[4833]: I0217 14:02:07.732381 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38b744d8-d892-4318-bb73-8a8c3e9f22d4-util" (OuterVolumeSpecName: "util") pod "38b744d8-d892-4318-bb73-8a8c3e9f22d4" (UID: "38b744d8-d892-4318-bb73-8a8c3e9f22d4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:02:07 crc kubenswrapper[4833]: I0217 14:02:07.819399 4833 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38b744d8-d892-4318-bb73-8a8c3e9f22d4-util\") on node \"crc\" DevicePath \"\""
Feb 17 14:02:07 crc kubenswrapper[4833]: I0217 14:02:07.819427 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltfgb\" (UniqueName: \"kubernetes.io/projected/38b744d8-d892-4318-bb73-8a8c3e9f22d4-kube-api-access-ltfgb\") on node \"crc\" DevicePath \"\""
Feb 17 14:02:07 crc kubenswrapper[4833]: I0217 14:02:07.819437 4833 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38b744d8-d892-4318-bb73-8a8c3e9f22d4-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 14:02:08 crc kubenswrapper[4833]: I0217 14:02:08.357391 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88" event={"ID":"38b744d8-d892-4318-bb73-8a8c3e9f22d4","Type":"ContainerDied","Data":"865bb46378e5c5a641c22fec9c7b974f9d59213de9cd7ded0105338e1395a59d"}
Feb 17 14:02:08 crc kubenswrapper[4833]: I0217 14:02:08.357751 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="865bb46378e5c5a641c22fec9c7b974f9d59213de9cd7ded0105338e1395a59d"
Feb 17 14:02:08 crc kubenswrapper[4833]: I0217 14:02:08.357451 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88"
Feb 17 14:02:44 crc kubenswrapper[4833]: I0217 14:02:44.244097 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 14:02:44 crc kubenswrapper[4833]: I0217 14:02:44.244611 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 14:03:14 crc kubenswrapper[4833]: I0217 14:03:14.245750 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 14:03:14 crc kubenswrapper[4833]: I0217 14:03:14.246292 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 14:03:44 crc kubenswrapper[4833]: I0217 14:03:44.243724 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 14:03:44 crc kubenswrapper[4833]: I0217 14:03:44.244298 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 14:03:44 crc kubenswrapper[4833]: I0217 14:03:44.244349 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl"
Feb 17 14:03:44 crc kubenswrapper[4833]: I0217 14:03:44.244992 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d59ec5097f9c4d0a402367427ee7a192fa778075aee8b4748e812d794a46746"} pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 14:03:44 crc kubenswrapper[4833]: I0217 14:03:44.245084 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" containerID="cri-o://1d59ec5097f9c4d0a402367427ee7a192fa778075aee8b4748e812d794a46746" gracePeriod=600
Feb 17 14:03:45 crc kubenswrapper[4833]: I0217 14:03:45.206679 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerID="1d59ec5097f9c4d0a402367427ee7a192fa778075aee8b4748e812d794a46746" exitCode=0
Feb 17 14:03:45 crc kubenswrapper[4833]: I0217 14:03:45.206746 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" event={"ID":"f4a1ca83-1919-4f9c-82de-c849cbd50e70","Type":"ContainerDied","Data":"1d59ec5097f9c4d0a402367427ee7a192fa778075aee8b4748e812d794a46746"}
Feb 17 14:03:45 crc kubenswrapper[4833]: I0217 14:03:45.207216 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" event={"ID":"f4a1ca83-1919-4f9c-82de-c849cbd50e70","Type":"ContainerStarted","Data":"8c37cbc628a91e3ba5634ce1d9a94ebdc5f36667fc0a41e44693193f64171282"}
Feb 17 14:03:45 crc kubenswrapper[4833]: I0217 14:03:45.207251 4833 scope.go:117] "RemoveContainer" containerID="1a5e1a9a5d589559159e0a6c8e522d3462758f7e97a786e9713efff02185b2a7"
Feb 17 14:05:44 crc kubenswrapper[4833]: I0217 14:05:44.243962 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 14:05:44 crc kubenswrapper[4833]: I0217 14:05:44.244532 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 14:06:14 crc kubenswrapper[4833]: I0217 14:06:14.244254 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 14:06:14 crc kubenswrapper[4833]: I0217 14:06:14.245759 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 14:06:44 crc kubenswrapper[4833]: I0217 14:06:44.243383 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 14:06:44 crc kubenswrapper[4833]: I0217 14:06:44.243928 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 14:06:44 crc kubenswrapper[4833]: I0217 14:06:44.243965 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl"
Feb 17 14:06:44 crc kubenswrapper[4833]: I0217 14:06:44.244534 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c37cbc628a91e3ba5634ce1d9a94ebdc5f36667fc0a41e44693193f64171282"} pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 14:06:44 crc kubenswrapper[4833]: I0217 14:06:44.244589 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" containerID="cri-o://8c37cbc628a91e3ba5634ce1d9a94ebdc5f36667fc0a41e44693193f64171282" gracePeriod=600
Feb 17 14:06:44 crc kubenswrapper[4833]: I0217 14:06:44.445115 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerID="8c37cbc628a91e3ba5634ce1d9a94ebdc5f36667fc0a41e44693193f64171282" exitCode=0
Feb 17 14:06:44 crc kubenswrapper[4833]: I0217 14:06:44.445153 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" event={"ID":"f4a1ca83-1919-4f9c-82de-c849cbd50e70","Type":"ContainerDied","Data":"8c37cbc628a91e3ba5634ce1d9a94ebdc5f36667fc0a41e44693193f64171282"}
Feb 17 14:06:44 crc kubenswrapper[4833]: I0217 14:06:44.445183 4833 scope.go:117] "RemoveContainer" containerID="1d59ec5097f9c4d0a402367427ee7a192fa778075aee8b4748e812d794a46746"
Feb 17 14:06:45 crc kubenswrapper[4833]: I0217 14:06:45.453333 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" event={"ID":"f4a1ca83-1919-4f9c-82de-c849cbd50e70","Type":"ContainerStarted","Data":"e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d"}
Feb 17 14:07:07 crc kubenswrapper[4833]: I0217 14:07:07.051331 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mtcdf/must-gather-4nb2z"]
Feb 17 14:07:07 crc kubenswrapper[4833]: E0217 14:07:07.052125 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b744d8-d892-4318-bb73-8a8c3e9f22d4" containerName="pull"
Feb 17 14:07:07 crc kubenswrapper[4833]: I0217 14:07:07.052140 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b744d8-d892-4318-bb73-8a8c3e9f22d4" containerName="pull"
Feb 17 14:07:07 crc kubenswrapper[4833]: E0217 14:07:07.052174 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b744d8-d892-4318-bb73-8a8c3e9f22d4" containerName="util"
Feb 17 14:07:07 crc kubenswrapper[4833]: I0217 14:07:07.052182 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b744d8-d892-4318-bb73-8a8c3e9f22d4" containerName="util"
Feb 17 14:07:07 crc kubenswrapper[4833]: E0217 14:07:07.052198 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b744d8-d892-4318-bb73-8a8c3e9f22d4" containerName="extract"
Feb 17 14:07:07 crc kubenswrapper[4833]: I0217 14:07:07.052209 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b744d8-d892-4318-bb73-8a8c3e9f22d4" containerName="extract"
Feb 17 14:07:07 crc kubenswrapper[4833]: I0217 14:07:07.052368 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b744d8-d892-4318-bb73-8a8c3e9f22d4" containerName="extract"
Feb 17 14:07:07 crc kubenswrapper[4833]: I0217 14:07:07.053165 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mtcdf/must-gather-4nb2z"
Feb 17 14:07:07 crc kubenswrapper[4833]: I0217 14:07:07.054792 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mtcdf"/"kube-root-ca.crt"
Feb 17 14:07:07 crc kubenswrapper[4833]: I0217 14:07:07.055603 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mtcdf"/"openshift-service-ca.crt"
Feb 17 14:07:07 crc kubenswrapper[4833]: I0217 14:07:07.055839 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mtcdf"/"default-dockercfg-qvz77"
Feb 17 14:07:07 crc kubenswrapper[4833]: I0217 14:07:07.069545 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mtcdf/must-gather-4nb2z"]
Feb 17 14:07:07 crc kubenswrapper[4833]: I0217 14:07:07.073952 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbt6q\" (UniqueName: \"kubernetes.io/projected/f461fd42-3f7e-4bfb-8cd3-34a3524e27ad-kube-api-access-jbt6q\") pod \"must-gather-4nb2z\" (UID: \"f461fd42-3f7e-4bfb-8cd3-34a3524e27ad\") " pod="openshift-must-gather-mtcdf/must-gather-4nb2z"
Feb 17 14:07:07 crc kubenswrapper[4833]: I0217 14:07:07.074117 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f461fd42-3f7e-4bfb-8cd3-34a3524e27ad-must-gather-output\") pod \"must-gather-4nb2z\" (UID: \"f461fd42-3f7e-4bfb-8cd3-34a3524e27ad\") " pod="openshift-must-gather-mtcdf/must-gather-4nb2z"
Feb 17 14:07:07 crc kubenswrapper[4833]: I0217 14:07:07.175244 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbt6q\" (UniqueName: \"kubernetes.io/projected/f461fd42-3f7e-4bfb-8cd3-34a3524e27ad-kube-api-access-jbt6q\") pod \"must-gather-4nb2z\" (UID: \"f461fd42-3f7e-4bfb-8cd3-34a3524e27ad\") " pod="openshift-must-gather-mtcdf/must-gather-4nb2z"
Feb 17 14:07:07 crc kubenswrapper[4833]: I0217 14:07:07.175362 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f461fd42-3f7e-4bfb-8cd3-34a3524e27ad-must-gather-output\") pod \"must-gather-4nb2z\" (UID: \"f461fd42-3f7e-4bfb-8cd3-34a3524e27ad\") " pod="openshift-must-gather-mtcdf/must-gather-4nb2z"
Feb 17 14:07:07 crc kubenswrapper[4833]: I0217 14:07:07.175755 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f461fd42-3f7e-4bfb-8cd3-34a3524e27ad-must-gather-output\") pod \"must-gather-4nb2z\" (UID: \"f461fd42-3f7e-4bfb-8cd3-34a3524e27ad\") " pod="openshift-must-gather-mtcdf/must-gather-4nb2z"
Feb 17 14:07:07 crc kubenswrapper[4833]: I0217 14:07:07.193677 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbt6q\" (UniqueName: \"kubernetes.io/projected/f461fd42-3f7e-4bfb-8cd3-34a3524e27ad-kube-api-access-jbt6q\") pod \"must-gather-4nb2z\" (UID: \"f461fd42-3f7e-4bfb-8cd3-34a3524e27ad\") " pod="openshift-must-gather-mtcdf/must-gather-4nb2z"
Feb 17 14:07:07 crc kubenswrapper[4833]: I0217 14:07:07.373025 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mtcdf/must-gather-4nb2z"
Feb 17 14:07:07 crc kubenswrapper[4833]: I0217 14:07:07.827293 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mtcdf/must-gather-4nb2z"]
Feb 17 14:07:07 crc kubenswrapper[4833]: I0217 14:07:07.835155 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 14:07:08 crc kubenswrapper[4833]: I0217 14:07:08.616073 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mtcdf/must-gather-4nb2z" event={"ID":"f461fd42-3f7e-4bfb-8cd3-34a3524e27ad","Type":"ContainerStarted","Data":"be3535b57a320685714e4f9e07bd51f3cd1f96851554a9d8784af49fce67dcdb"}
Feb 17 14:07:16 crc kubenswrapper[4833]: I0217 14:07:16.687670 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mtcdf/must-gather-4nb2z" event={"ID":"f461fd42-3f7e-4bfb-8cd3-34a3524e27ad","Type":"ContainerStarted","Data":"66abcf5abb12714ffe2614fa4541b2247efb9fc2497b573485bd8fcda672ed53"}
Feb 17 14:07:16 crc kubenswrapper[4833]: I0217 14:07:16.688190 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mtcdf/must-gather-4nb2z" event={"ID":"f461fd42-3f7e-4bfb-8cd3-34a3524e27ad","Type":"ContainerStarted","Data":"038e30feb973382cc3a0fad208909871faf017da9662cbcb8ef478e8958b56a1"}
Feb 17 14:07:16 crc kubenswrapper[4833]: I0217 14:07:16.712538 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mtcdf/must-gather-4nb2z" podStartSLOduration=1.804656287 podStartE2EDuration="9.712519435s" podCreationTimestamp="2026-02-17 14:07:07 +0000 UTC" firstStartedPulling="2026-02-17 14:07:07.834873976 +0000 UTC m=+1317.469973409" lastFinishedPulling="2026-02-17 14:07:15.742737124 +0000 UTC m=+1325.377836557" observedRunningTime="2026-02-17 14:07:16.708608753 +0000 UTC m=+1326.343708186" watchObservedRunningTime="2026-02-17 14:07:16.712519435 +0000 UTC m=+1326.347618858"
Feb 17 14:08:14 crc kubenswrapper[4833]: I0217 14:08:14.659703 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rrnwf"]
Feb 17 14:08:14 crc kubenswrapper[4833]: I0217 14:08:14.661833 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rrnwf"
Feb 17 14:08:14 crc kubenswrapper[4833]: I0217 14:08:14.684159 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rrnwf"]
Feb 17 14:08:14 crc kubenswrapper[4833]: I0217 14:08:14.693078 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d23b440e-abdf-4558-92cc-204bdab1dadb-utilities\") pod \"redhat-operators-rrnwf\" (UID: \"d23b440e-abdf-4558-92cc-204bdab1dadb\") " pod="openshift-marketplace/redhat-operators-rrnwf"
Feb 17 14:08:14 crc kubenswrapper[4833]: I0217 14:08:14.693379 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8669n\" (UniqueName: \"kubernetes.io/projected/d23b440e-abdf-4558-92cc-204bdab1dadb-kube-api-access-8669n\") pod \"redhat-operators-rrnwf\" (UID: \"d23b440e-abdf-4558-92cc-204bdab1dadb\") " pod="openshift-marketplace/redhat-operators-rrnwf"
Feb 17 14:08:14 crc kubenswrapper[4833]: I0217 14:08:14.693539 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d23b440e-abdf-4558-92cc-204bdab1dadb-catalog-content\") pod \"redhat-operators-rrnwf\" (UID: \"d23b440e-abdf-4558-92cc-204bdab1dadb\") " pod="openshift-marketplace/redhat-operators-rrnwf"
Feb 17 14:08:14 crc kubenswrapper[4833]: I0217 14:08:14.794422 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d23b440e-abdf-4558-92cc-204bdab1dadb-utilities\") pod \"redhat-operators-rrnwf\" (UID: \"d23b440e-abdf-4558-92cc-204bdab1dadb\") " pod="openshift-marketplace/redhat-operators-rrnwf"
Feb 17 14:08:14 crc kubenswrapper[4833]: I0217 14:08:14.794498 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8669n\" (UniqueName: \"kubernetes.io/projected/d23b440e-abdf-4558-92cc-204bdab1dadb-kube-api-access-8669n\") pod \"redhat-operators-rrnwf\" (UID: \"d23b440e-abdf-4558-92cc-204bdab1dadb\") " pod="openshift-marketplace/redhat-operators-rrnwf"
Feb 17 14:08:14 crc kubenswrapper[4833]: I0217 14:08:14.794581 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d23b440e-abdf-4558-92cc-204bdab1dadb-catalog-content\") pod \"redhat-operators-rrnwf\" (UID: \"d23b440e-abdf-4558-92cc-204bdab1dadb\") " pod="openshift-marketplace/redhat-operators-rrnwf"
Feb 17 14:08:14 crc kubenswrapper[4833]: I0217 14:08:14.794985 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d23b440e-abdf-4558-92cc-204bdab1dadb-utilities\") pod \"redhat-operators-rrnwf\" (UID: \"d23b440e-abdf-4558-92cc-204bdab1dadb\") " pod="openshift-marketplace/redhat-operators-rrnwf"
Feb 17 14:08:14 crc kubenswrapper[4833]: I0217 14:08:14.795024 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d23b440e-abdf-4558-92cc-204bdab1dadb-catalog-content\") pod \"redhat-operators-rrnwf\" (UID: \"d23b440e-abdf-4558-92cc-204bdab1dadb\") " pod="openshift-marketplace/redhat-operators-rrnwf"
Feb 17 14:08:14 crc kubenswrapper[4833]: I0217 14:08:14.815077 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8669n\" (UniqueName: \"kubernetes.io/projected/d23b440e-abdf-4558-92cc-204bdab1dadb-kube-api-access-8669n\") pod \"redhat-operators-rrnwf\" (UID: \"d23b440e-abdf-4558-92cc-204bdab1dadb\") " pod="openshift-marketplace/redhat-operators-rrnwf"
Feb 17 14:08:14 crc kubenswrapper[4833]: I0217 14:08:14.985309 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rrnwf"
Feb 17 14:08:15 crc kubenswrapper[4833]: I0217 14:08:15.243853 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rrnwf"]
Feb 17 14:08:15 crc kubenswrapper[4833]: I0217 14:08:15.616329 4833 generic.go:334] "Generic (PLEG): container finished" podID="d23b440e-abdf-4558-92cc-204bdab1dadb" containerID="663286fa3b62d04f19954ed1840cedb3e9a32c46a8f4653cf3af6bb49e6ab7b2" exitCode=0
Feb 17 14:08:15 crc kubenswrapper[4833]: I0217 14:08:15.616417 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrnwf" event={"ID":"d23b440e-abdf-4558-92cc-204bdab1dadb","Type":"ContainerDied","Data":"663286fa3b62d04f19954ed1840cedb3e9a32c46a8f4653cf3af6bb49e6ab7b2"}
Feb 17 14:08:15 crc kubenswrapper[4833]: I0217 14:08:15.616642 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrnwf" event={"ID":"d23b440e-abdf-4558-92cc-204bdab1dadb","Type":"ContainerStarted","Data":"48bfdf1ec4670fdf215ded88edbb4db36da1c42724185448af27af33d4dc5f48"}
Feb 17 14:08:16 crc kubenswrapper[4833]: I0217 14:08:16.624539 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrnwf" event={"ID":"d23b440e-abdf-4558-92cc-204bdab1dadb","Type":"ContainerStarted","Data":"85f64302e17123d30c944680302f9387e548de2beb0859def72c3dadea0aac76"}
Feb 17 14:08:17 crc kubenswrapper[4833]: I0217 14:08:17.633909 4833 generic.go:334] "Generic (PLEG): container finished" podID="d23b440e-abdf-4558-92cc-204bdab1dadb" containerID="85f64302e17123d30c944680302f9387e548de2beb0859def72c3dadea0aac76" exitCode=0
Feb 17 14:08:17 crc kubenswrapper[4833]: I0217 14:08:17.633957 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrnwf" event={"ID":"d23b440e-abdf-4558-92cc-204bdab1dadb","Type":"ContainerDied","Data":"85f64302e17123d30c944680302f9387e548de2beb0859def72c3dadea0aac76"}
Feb 17 14:08:18 crc kubenswrapper[4833]: I0217 14:08:18.643178 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrnwf" event={"ID":"d23b440e-abdf-4558-92cc-204bdab1dadb","Type":"ContainerStarted","Data":"90b99c8344060fa506aad95e452aba3c8b17e6d84260ccf28b6ece3b8751bbb7"}
Feb 17 14:08:18 crc kubenswrapper[4833]: I0217 14:08:18.662922 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rrnwf" podStartSLOduration=2.246361677 podStartE2EDuration="4.662901998s" podCreationTimestamp="2026-02-17 14:08:14 +0000 UTC" firstStartedPulling="2026-02-17 14:08:15.617585052 +0000 UTC m=+1385.252684485" lastFinishedPulling="2026-02-17 14:08:18.034125383 +0000 UTC m=+1387.669224806" observedRunningTime="2026-02-17 14:08:18.658260116 +0000 UTC m=+1388.293359569" watchObservedRunningTime="2026-02-17 14:08:18.662901998 +0000 UTC m=+1388.298001441"
Feb 17 14:08:24 crc kubenswrapper[4833]: I0217 14:08:24.005833 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88_38b744d8-d892-4318-bb73-8a8c3e9f22d4/util/0.log"
Feb 17 14:08:24 crc kubenswrapper[4833]: I0217 14:08:24.433750 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88_38b744d8-d892-4318-bb73-8a8c3e9f22d4/pull/0.log"
Feb 17 14:08:24 crc kubenswrapper[4833]: I0217 14:08:24.470161 4833 log.go:25] "Finished parsing log file"
path="/var/log/pods/openstack-operators_4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88_38b744d8-d892-4318-bb73-8a8c3e9f22d4/util/0.log" Feb 17 14:08:24 crc kubenswrapper[4833]: I0217 14:08:24.493029 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88_38b744d8-d892-4318-bb73-8a8c3e9f22d4/pull/0.log" Feb 17 14:08:24 crc kubenswrapper[4833]: I0217 14:08:24.663280 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88_38b744d8-d892-4318-bb73-8a8c3e9f22d4/pull/0.log" Feb 17 14:08:24 crc kubenswrapper[4833]: I0217 14:08:24.681725 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88_38b744d8-d892-4318-bb73-8a8c3e9f22d4/util/0.log" Feb 17 14:08:24 crc kubenswrapper[4833]: I0217 14:08:24.750468 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4cfa81efd7a1b8ba99823c5bbcbfa8e1271e52459ab2f2e46cc7719a7dctm88_38b744d8-d892-4318-bb73-8a8c3e9f22d4/extract/0.log" Feb 17 14:08:24 crc kubenswrapper[4833]: I0217 14:08:24.866472 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq_fea10cca-4838-469c-9988-4f1c15e2d66d/util/0.log" Feb 17 14:08:24 crc kubenswrapper[4833]: I0217 14:08:24.986348 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rrnwf" Feb 17 14:08:24 crc kubenswrapper[4833]: I0217 14:08:24.986407 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rrnwf" Feb 17 14:08:25 crc kubenswrapper[4833]: I0217 14:08:25.035570 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-rrnwf" Feb 17 14:08:25 crc kubenswrapper[4833]: I0217 14:08:25.043405 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq_fea10cca-4838-469c-9988-4f1c15e2d66d/pull/0.log" Feb 17 14:08:25 crc kubenswrapper[4833]: I0217 14:08:25.099190 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq_fea10cca-4838-469c-9988-4f1c15e2d66d/util/0.log" Feb 17 14:08:25 crc kubenswrapper[4833]: I0217 14:08:25.120772 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq_fea10cca-4838-469c-9988-4f1c15e2d66d/pull/0.log" Feb 17 14:08:25 crc kubenswrapper[4833]: I0217 14:08:25.301804 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq_fea10cca-4838-469c-9988-4f1c15e2d66d/pull/0.log" Feb 17 14:08:25 crc kubenswrapper[4833]: I0217 14:08:25.309629 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq_fea10cca-4838-469c-9988-4f1c15e2d66d/util/0.log" Feb 17 14:08:25 crc kubenswrapper[4833]: I0217 14:08:25.350197 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_76efda0fbc8a5f96c6c91adb000f6e534407faa1f502e379d8598e96desmktq_fea10cca-4838-469c-9988-4f1c15e2d66d/extract/0.log" Feb 17 14:08:25 crc kubenswrapper[4833]: I0217 14:08:25.729654 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rrnwf" Feb 17 14:08:25 crc kubenswrapper[4833]: I0217 14:08:25.975286 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-55cc45767f-hdxb7_21fdadaa-0128-4daa-9dd8-2d4f7c600c99/manager/0.log" Feb 17 14:08:26 crc kubenswrapper[4833]: I0217 14:08:26.176319 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68c6d499cb-zqqxg_35e45f6b-d546-4818-94a1-c8490e83a786/manager/0.log" Feb 17 14:08:26 crc kubenswrapper[4833]: I0217 14:08:26.266150 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-57746b5ff9-jjchm_df534d1f-b343-4090-b8ad-114864ea82ec/manager/0.log" Feb 17 14:08:26 crc kubenswrapper[4833]: I0217 14:08:26.408259 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-9595d6797-798bv_d98ee2cd-c429-474d-a6a8-a7f9bc1ed559/manager/0.log" Feb 17 14:08:26 crc kubenswrapper[4833]: I0217 14:08:26.576022 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-54fb488b88-kkrrz_2acb502c-ae3f-4b68-b2c8-aaebcddc54d8/manager/0.log" Feb 17 14:08:26 crc kubenswrapper[4833]: I0217 14:08:26.825466 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-66d6b5f488-87mt8_ed6e734c-3eb1-47d0-ad82-aa5c934da55a/manager/0.log" Feb 17 14:08:27 crc kubenswrapper[4833]: I0217 14:08:27.082555 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6494cdbf8f-5lrx9_19d72e8b-fb46-4e69-bcbf-90eae727d08f/manager/0.log" Feb 17 14:08:27 crc kubenswrapper[4833]: I0217 14:08:27.313401 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-6c78d668d5-pc29h_3c850072-fbc2-4f6a-a59c-51498bc5aef2/manager/0.log" Feb 17 14:08:27 crc kubenswrapper[4833]: I0217 14:08:27.473176 4833 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-96fff9cb8-rq2hb_056f0c10-839f-4bdf-9b0e-2c6eabb209a8/manager/0.log" Feb 17 14:08:27 crc kubenswrapper[4833]: I0217 14:08:27.637445 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66997756f6-brrjj_92da7367-c0eb-4ad3-a2d9-aebe2540cb11/manager/0.log" Feb 17 14:08:27 crc kubenswrapper[4833]: I0217 14:08:27.837828 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54967dbbdf-bffkj_6768cf80-4466-4e34-b1f5-fd7b4728b6ff/manager/0.log" Feb 17 14:08:28 crc kubenswrapper[4833]: I0217 14:08:28.413799 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5ddd85db87-kst2n_22eaba8f-f822-42a0-a01f-4b1bdd5c0728/manager/0.log" Feb 17 14:08:28 crc kubenswrapper[4833]: I0217 14:08:28.514282 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-c4b7d6946-trcbs_6504a5be-090f-4500-adaa-9ace23ba69f3/manager/0.log" Feb 17 14:08:28 crc kubenswrapper[4833]: I0217 14:08:28.667209 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rrnwf"] Feb 17 14:08:28 crc kubenswrapper[4833]: I0217 14:08:28.667504 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rrnwf" podUID="d23b440e-abdf-4558-92cc-204bdab1dadb" containerName="registry-server" containerID="cri-o://90b99c8344060fa506aad95e452aba3c8b17e6d84260ccf28b6ece3b8751bbb7" gracePeriod=2 Feb 17 14:08:28 crc kubenswrapper[4833]: I0217 14:08:28.741648 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-c5677dc5d-tjdm7_032e7e1b-b6e8-4a69-905a-cd01e516f155/manager/0.log" Feb 17 14:08:28 crc kubenswrapper[4833]: 
I0217 14:08:28.887746 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5856dc4bfc-jh56r_eaf6f3b9-35a6-4048-94cb-53563475161a/manager/0.log" Feb 17 14:08:28 crc kubenswrapper[4833]: I0217 14:08:28.982762 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-54l4d_8a60a0a6-6ea5-4412-9ff0-b384e78d3c47/registry-server/0.log" Feb 17 14:08:28 crc kubenswrapper[4833]: I0217 14:08:28.987616 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-745bbbd77b-twx52_9985098c-6fe1-4a48-8aaf-6ef63f487c41/manager/0.log" Feb 17 14:08:29 crc kubenswrapper[4833]: I0217 14:08:29.151149 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-85c99d655-s4gk9_c35ceddd-0211-4828-a16e-4ff8d9d9dcd1/manager/0.log" Feb 17 14:08:29 crc kubenswrapper[4833]: I0217 14:08:29.218834 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57bd55f9b7-dt8tb_773b4953-21e2-42e2-b438-29b340f453ad/manager/0.log" Feb 17 14:08:29 crc kubenswrapper[4833]: I0217 14:08:29.356751 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-bdv7p_c42495df-6d52-41f8-8b47-764a31284526/operator/0.log" Feb 17 14:08:29 crc kubenswrapper[4833]: I0217 14:08:29.420058 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-79558bbfbf-7pspc_c55ae282-60fe-4b05-9bee-5539b9026800/manager/0.log" Feb 17 14:08:29 crc kubenswrapper[4833]: I0217 14:08:29.567026 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-56dc67d744-6sx42_46d10b37-a41e-4063-8a7c-d1f8bf36847f/manager/0.log" Feb 17 14:08:29 crc kubenswrapper[4833]: 
I0217 14:08:29.644794 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8467ccb4c8-n4rt9_6f6fed97-8c9b-4262-962a-912fe2777ffc/manager/0.log" Feb 17 14:08:29 crc kubenswrapper[4833]: I0217 14:08:29.701983 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-59669cd6b8-vzhgx_c059d8ff-7282-40c8-8a7e-542d11569869/manager/0.log" Feb 17 14:08:29 crc kubenswrapper[4833]: I0217 14:08:29.718947 4833 generic.go:334] "Generic (PLEG): container finished" podID="d23b440e-abdf-4558-92cc-204bdab1dadb" containerID="90b99c8344060fa506aad95e452aba3c8b17e6d84260ccf28b6ece3b8751bbb7" exitCode=0 Feb 17 14:08:29 crc kubenswrapper[4833]: I0217 14:08:29.718988 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrnwf" event={"ID":"d23b440e-abdf-4558-92cc-204bdab1dadb","Type":"ContainerDied","Data":"90b99c8344060fa506aad95e452aba3c8b17e6d84260ccf28b6ece3b8751bbb7"} Feb 17 14:08:29 crc kubenswrapper[4833]: I0217 14:08:29.839801 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rrnwf" Feb 17 14:08:29 crc kubenswrapper[4833]: I0217 14:08:29.898542 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d23b440e-abdf-4558-92cc-204bdab1dadb-utilities\") pod \"d23b440e-abdf-4558-92cc-204bdab1dadb\" (UID: \"d23b440e-abdf-4558-92cc-204bdab1dadb\") " Feb 17 14:08:29 crc kubenswrapper[4833]: I0217 14:08:29.898661 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8669n\" (UniqueName: \"kubernetes.io/projected/d23b440e-abdf-4558-92cc-204bdab1dadb-kube-api-access-8669n\") pod \"d23b440e-abdf-4558-92cc-204bdab1dadb\" (UID: \"d23b440e-abdf-4558-92cc-204bdab1dadb\") " Feb 17 14:08:29 crc kubenswrapper[4833]: I0217 14:08:29.898897 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d23b440e-abdf-4558-92cc-204bdab1dadb-catalog-content\") pod \"d23b440e-abdf-4558-92cc-204bdab1dadb\" (UID: \"d23b440e-abdf-4558-92cc-204bdab1dadb\") " Feb 17 14:08:29 crc kubenswrapper[4833]: I0217 14:08:29.900607 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d23b440e-abdf-4558-92cc-204bdab1dadb-utilities" (OuterVolumeSpecName: "utilities") pod "d23b440e-abdf-4558-92cc-204bdab1dadb" (UID: "d23b440e-abdf-4558-92cc-204bdab1dadb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:08:29 crc kubenswrapper[4833]: I0217 14:08:29.908738 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d23b440e-abdf-4558-92cc-204bdab1dadb-kube-api-access-8669n" (OuterVolumeSpecName: "kube-api-access-8669n") pod "d23b440e-abdf-4558-92cc-204bdab1dadb" (UID: "d23b440e-abdf-4558-92cc-204bdab1dadb"). InnerVolumeSpecName "kube-api-access-8669n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:08:29 crc kubenswrapper[4833]: I0217 14:08:29.917645 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-index-nrjjw_573bc2ac-05e9-4a4e-89dd-497972463c4b/registry-server/0.log" Feb 17 14:08:30 crc kubenswrapper[4833]: I0217 14:08:30.000578 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d23b440e-abdf-4558-92cc-204bdab1dadb-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:30 crc kubenswrapper[4833]: I0217 14:08:30.000609 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8669n\" (UniqueName: \"kubernetes.io/projected/d23b440e-abdf-4558-92cc-204bdab1dadb-kube-api-access-8669n\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:30 crc kubenswrapper[4833]: I0217 14:08:30.049743 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d23b440e-abdf-4558-92cc-204bdab1dadb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d23b440e-abdf-4558-92cc-204bdab1dadb" (UID: "d23b440e-abdf-4558-92cc-204bdab1dadb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:08:30 crc kubenswrapper[4833]: I0217 14:08:30.103934 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d23b440e-abdf-4558-92cc-204bdab1dadb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:30 crc kubenswrapper[4833]: I0217 14:08:30.728486 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrnwf" event={"ID":"d23b440e-abdf-4558-92cc-204bdab1dadb","Type":"ContainerDied","Data":"48bfdf1ec4670fdf215ded88edbb4db36da1c42724185448af27af33d4dc5f48"} Feb 17 14:08:30 crc kubenswrapper[4833]: I0217 14:08:30.728542 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rrnwf" Feb 17 14:08:30 crc kubenswrapper[4833]: I0217 14:08:30.728840 4833 scope.go:117] "RemoveContainer" containerID="90b99c8344060fa506aad95e452aba3c8b17e6d84260ccf28b6ece3b8751bbb7" Feb 17 14:08:30 crc kubenswrapper[4833]: I0217 14:08:30.759070 4833 scope.go:117] "RemoveContainer" containerID="85f64302e17123d30c944680302f9387e548de2beb0859def72c3dadea0aac76" Feb 17 14:08:30 crc kubenswrapper[4833]: I0217 14:08:30.782094 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rrnwf"] Feb 17 14:08:30 crc kubenswrapper[4833]: I0217 14:08:30.792498 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rrnwf"] Feb 17 14:08:30 crc kubenswrapper[4833]: I0217 14:08:30.794623 4833 scope.go:117] "RemoveContainer" containerID="663286fa3b62d04f19954ed1840cedb3e9a32c46a8f4653cf3af6bb49e6ab7b2" Feb 17 14:08:31 crc kubenswrapper[4833]: I0217 14:08:31.051771 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d23b440e-abdf-4558-92cc-204bdab1dadb" path="/var/lib/kubelet/pods/d23b440e-abdf-4558-92cc-204bdab1dadb/volumes" Feb 17 14:08:44 crc kubenswrapper[4833]: I0217 14:08:44.243534 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:08:44 crc kubenswrapper[4833]: I0217 14:08:44.244030 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:08:51 crc kubenswrapper[4833]: I0217 
14:08:51.866352 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-slqbd_3e99f9e5-e0f4-4444-8303-f69571809455/control-plane-machine-set-operator/0.log" Feb 17 14:08:52 crc kubenswrapper[4833]: I0217 14:08:52.089642 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-stnj2_c1e5bc60-f7d5-436d-9298-3099adb6bc0a/machine-api-operator/0.log" Feb 17 14:08:52 crc kubenswrapper[4833]: I0217 14:08:52.116277 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-stnj2_c1e5bc60-f7d5-436d-9298-3099adb6bc0a/kube-rbac-proxy/0.log" Feb 17 14:09:05 crc kubenswrapper[4833]: I0217 14:09:05.817981 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-pv7vq_3d30ed8b-6183-40ff-89c2-869d421aaf38/cert-manager-controller/0.log" Feb 17 14:09:05 crc kubenswrapper[4833]: I0217 14:09:05.978658 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-xw652_4c52a718-46bf-4c12-9775-4c90e0b60065/cert-manager-cainjector/0.log" Feb 17 14:09:06 crc kubenswrapper[4833]: I0217 14:09:06.016550 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-lqxtp_5deb8fad-a9b9-4caa-be01-c9ab1b66f7cb/cert-manager-webhook/0.log" Feb 17 14:09:14 crc kubenswrapper[4833]: I0217 14:09:14.243989 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:09:14 crc kubenswrapper[4833]: I0217 14:09:14.244678 4833 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:09:20 crc kubenswrapper[4833]: I0217 14:09:20.913892 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-665f5_64dfe4ce-e2b1-4005-88af-c8ad6b10f4aa/nmstate-console-plugin/0.log" Feb 17 14:09:21 crc kubenswrapper[4833]: I0217 14:09:21.075647 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-dz6xg_575ecda8-5096-4a4b-8bea-7163a6233bc2/nmstate-handler/0.log" Feb 17 14:09:21 crc kubenswrapper[4833]: I0217 14:09:21.195210 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-jlgbt_e60895e0-eccc-4e5c-adbe-c2286162959e/kube-rbac-proxy/0.log" Feb 17 14:09:21 crc kubenswrapper[4833]: I0217 14:09:21.248270 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-jlgbt_e60895e0-eccc-4e5c-adbe-c2286162959e/nmstate-metrics/0.log" Feb 17 14:09:21 crc kubenswrapper[4833]: I0217 14:09:21.348407 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-llrpt_e4203154-718b-47fd-8727-178e1e2caeb7/nmstate-operator/0.log" Feb 17 14:09:21 crc kubenswrapper[4833]: I0217 14:09:21.434196 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-5cfll_a9cf2a9a-b92f-41f1-a73f-f878ecd4ec66/nmstate-webhook/0.log" Feb 17 14:09:37 crc kubenswrapper[4833]: I0217 14:09:37.666087 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-g9798_305d6a8c-4999-4fbb-a005-e115897f16c8/prometheus-operator/0.log" Feb 17 14:09:37 crc kubenswrapper[4833]: 
I0217 14:09:37.891829 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_05eabfc9-be7f-4663-8c09-7662798751cb/prometheus-operator-admission-webhook/0.log" Feb 17 14:09:37 crc kubenswrapper[4833]: I0217 14:09:37.927313 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_9dedda0d-335a-43f4-b199-e9da78d5b37b/prometheus-operator-admission-webhook/0.log" Feb 17 14:09:38 crc kubenswrapper[4833]: I0217 14:09:38.078144 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-hwr99_153717ff-fc08-498f-a4ee-bf06ef5866ab/operator/0.log" Feb 17 14:09:38 crc kubenswrapper[4833]: I0217 14:09:38.150057 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-ckvt2_fd47124f-cfc1-4a57-820c-b54cd2e4d113/perses-operator/0.log" Feb 17 14:09:44 crc kubenswrapper[4833]: I0217 14:09:44.244021 4833 patch_prober.go:28] interesting pod/machine-config-daemon-nmzvl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:09:44 crc kubenswrapper[4833]: I0217 14:09:44.244740 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:09:44 crc kubenswrapper[4833]: I0217 14:09:44.244792 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" Feb 17 14:09:44 crc 
kubenswrapper[4833]: I0217 14:09:44.245625 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d"} pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 14:09:44 crc kubenswrapper[4833]: I0217 14:09:44.245693 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerName="machine-config-daemon" containerID="cri-o://e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" gracePeriod=600 Feb 17 14:09:44 crc kubenswrapper[4833]: E0217 14:09:44.369551 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:09:45 crc kubenswrapper[4833]: I0217 14:09:45.344786 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" exitCode=0 Feb 17 14:09:45 crc kubenswrapper[4833]: I0217 14:09:45.344867 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" event={"ID":"f4a1ca83-1919-4f9c-82de-c849cbd50e70","Type":"ContainerDied","Data":"e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d"} Feb 17 14:09:45 crc kubenswrapper[4833]: I0217 14:09:45.345179 4833 scope.go:117] "RemoveContainer" 
containerID="8c37cbc628a91e3ba5634ce1d9a94ebdc5f36667fc0a41e44693193f64171282" Feb 17 14:09:45 crc kubenswrapper[4833]: I0217 14:09:45.345657 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:09:45 crc kubenswrapper[4833]: E0217 14:09:45.345872 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:09:54 crc kubenswrapper[4833]: I0217 14:09:54.348503 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-clwcr_8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6/controller/0.log" Feb 17 14:09:54 crc kubenswrapper[4833]: I0217 14:09:54.379823 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-clwcr_8e4e92de-aa2d-4d30-82b2-703a3cf2d7b6/kube-rbac-proxy/0.log" Feb 17 14:09:54 crc kubenswrapper[4833]: I0217 14:09:54.570894 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ls8rs_0485a980-f2be-4c1b-83b4-3449e5794d2c/cp-frr-files/0.log" Feb 17 14:09:54 crc kubenswrapper[4833]: I0217 14:09:54.718765 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ls8rs_0485a980-f2be-4c1b-83b4-3449e5794d2c/cp-frr-files/0.log" Feb 17 14:09:54 crc kubenswrapper[4833]: I0217 14:09:54.740353 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ls8rs_0485a980-f2be-4c1b-83b4-3449e5794d2c/cp-metrics/0.log" Feb 17 14:09:54 crc kubenswrapper[4833]: I0217 14:09:54.770615 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-ls8rs_0485a980-f2be-4c1b-83b4-3449e5794d2c/cp-reloader/0.log" Feb 17 14:09:54 crc kubenswrapper[4833]: I0217 14:09:54.796785 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ls8rs_0485a980-f2be-4c1b-83b4-3449e5794d2c/cp-reloader/0.log" Feb 17 14:09:54 crc kubenswrapper[4833]: I0217 14:09:54.991554 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ls8rs_0485a980-f2be-4c1b-83b4-3449e5794d2c/cp-reloader/0.log" Feb 17 14:09:55 crc kubenswrapper[4833]: I0217 14:09:55.043598 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ls8rs_0485a980-f2be-4c1b-83b4-3449e5794d2c/cp-metrics/0.log" Feb 17 14:09:55 crc kubenswrapper[4833]: I0217 14:09:55.060631 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ls8rs_0485a980-f2be-4c1b-83b4-3449e5794d2c/cp-metrics/0.log" Feb 17 14:09:55 crc kubenswrapper[4833]: I0217 14:09:55.072795 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ls8rs_0485a980-f2be-4c1b-83b4-3449e5794d2c/cp-frr-files/0.log" Feb 17 14:09:55 crc kubenswrapper[4833]: I0217 14:09:55.194287 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ls8rs_0485a980-f2be-4c1b-83b4-3449e5794d2c/cp-frr-files/0.log" Feb 17 14:09:55 crc kubenswrapper[4833]: I0217 14:09:55.226742 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ls8rs_0485a980-f2be-4c1b-83b4-3449e5794d2c/cp-reloader/0.log" Feb 17 14:09:55 crc kubenswrapper[4833]: I0217 14:09:55.277948 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ls8rs_0485a980-f2be-4c1b-83b4-3449e5794d2c/controller/0.log" Feb 17 14:09:55 crc kubenswrapper[4833]: I0217 14:09:55.345117 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-ls8rs_0485a980-f2be-4c1b-83b4-3449e5794d2c/cp-metrics/0.log" Feb 17 14:09:55 crc kubenswrapper[4833]: I0217 14:09:55.482586 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ls8rs_0485a980-f2be-4c1b-83b4-3449e5794d2c/frr-metrics/0.log" Feb 17 14:09:55 crc kubenswrapper[4833]: I0217 14:09:55.637278 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ls8rs_0485a980-f2be-4c1b-83b4-3449e5794d2c/frr/0.log" Feb 17 14:09:55 crc kubenswrapper[4833]: I0217 14:09:55.643669 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ls8rs_0485a980-f2be-4c1b-83b4-3449e5794d2c/kube-rbac-proxy/0.log" Feb 17 14:09:55 crc kubenswrapper[4833]: I0217 14:09:55.682789 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ls8rs_0485a980-f2be-4c1b-83b4-3449e5794d2c/kube-rbac-proxy-frr/0.log" Feb 17 14:09:55 crc kubenswrapper[4833]: I0217 14:09:55.767066 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ls8rs_0485a980-f2be-4c1b-83b4-3449e5794d2c/reloader/0.log" Feb 17 14:09:55 crc kubenswrapper[4833]: I0217 14:09:55.910490 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-8wwr4_034a2f99-1c32-4d53-8018-739d262fdc4c/frr-k8s-webhook-server/0.log" Feb 17 14:09:56 crc kubenswrapper[4833]: I0217 14:09:56.059475 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-55cdbb5474-tkm6p_43cee1ba-407b-420e-985d-3cc8806c0092/manager/0.log" Feb 17 14:09:56 crc kubenswrapper[4833]: I0217 14:09:56.159927 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6cbf77cb9b-nv8ks_266df56f-825e-4e4e-89e2-90deb0c2fa79/webhook-server/0.log" Feb 17 14:09:56 crc kubenswrapper[4833]: I0217 14:09:56.317755 4833 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_speaker-slf66_2b33d487-2e39-4f67-b34c-afc3b7cca769/kube-rbac-proxy/0.log" Feb 17 14:09:56 crc kubenswrapper[4833]: I0217 14:09:56.438855 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-slf66_2b33d487-2e39-4f67-b34c-afc3b7cca769/speaker/0.log" Feb 17 14:10:01 crc kubenswrapper[4833]: I0217 14:10:01.045886 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:10:01 crc kubenswrapper[4833]: E0217 14:10:01.046366 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:10:10 crc kubenswrapper[4833]: I0217 14:10:10.386709 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf_29c10323-62d4-4656-a511-c3ec32027993/util/0.log" Feb 17 14:10:10 crc kubenswrapper[4833]: I0217 14:10:10.634806 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf_29c10323-62d4-4656-a511-c3ec32027993/util/0.log" Feb 17 14:10:10 crc kubenswrapper[4833]: I0217 14:10:10.638444 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf_29c10323-62d4-4656-a511-c3ec32027993/pull/0.log" Feb 17 14:10:10 crc kubenswrapper[4833]: I0217 14:10:10.687793 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf_29c10323-62d4-4656-a511-c3ec32027993/pull/0.log" Feb 17 14:10:10 crc kubenswrapper[4833]: I0217 14:10:10.922437 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf_29c10323-62d4-4656-a511-c3ec32027993/util/0.log" Feb 17 14:10:10 crc kubenswrapper[4833]: I0217 14:10:10.931026 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf_29c10323-62d4-4656-a511-c3ec32027993/extract/0.log" Feb 17 14:10:10 crc kubenswrapper[4833]: I0217 14:10:10.939129 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56skhf_29c10323-62d4-4656-a511-c3ec32027993/pull/0.log" Feb 17 14:10:11 crc kubenswrapper[4833]: I0217 14:10:11.109424 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c_9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520/util/0.log" Feb 17 14:10:11 crc kubenswrapper[4833]: I0217 14:10:11.306503 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c_9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520/pull/0.log" Feb 17 14:10:11 crc kubenswrapper[4833]: I0217 14:10:11.312079 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c_9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520/util/0.log" Feb 17 14:10:11 crc kubenswrapper[4833]: I0217 14:10:11.356650 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c_9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520/pull/0.log" Feb 17 
14:10:11 crc kubenswrapper[4833]: I0217 14:10:11.488637 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c_9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520/util/0.log" Feb 17 14:10:11 crc kubenswrapper[4833]: I0217 14:10:11.538657 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c_9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520/pull/0.log" Feb 17 14:10:11 crc kubenswrapper[4833]: I0217 14:10:11.575476 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xr92c_9a9d49dc-0f2e-43d3-8d5f-9cf11ef29520/extract/0.log" Feb 17 14:10:11 crc kubenswrapper[4833]: I0217 14:10:11.757423 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb_5dd72ffb-4456-49bc-9dae-ba715f5a12fa/util/0.log" Feb 17 14:10:11 crc kubenswrapper[4833]: I0217 14:10:11.899873 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb_5dd72ffb-4456-49bc-9dae-ba715f5a12fa/util/0.log" Feb 17 14:10:11 crc kubenswrapper[4833]: I0217 14:10:11.950535 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb_5dd72ffb-4456-49bc-9dae-ba715f5a12fa/pull/0.log" Feb 17 14:10:11 crc kubenswrapper[4833]: I0217 14:10:11.958162 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb_5dd72ffb-4456-49bc-9dae-ba715f5a12fa/pull/0.log" Feb 17 14:10:12 crc kubenswrapper[4833]: I0217 14:10:12.041807 4833 scope.go:117] "RemoveContainer" 
containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:10:12 crc kubenswrapper[4833]: E0217 14:10:12.042113 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:10:12 crc kubenswrapper[4833]: I0217 14:10:12.226562 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb_5dd72ffb-4456-49bc-9dae-ba715f5a12fa/pull/0.log" Feb 17 14:10:12 crc kubenswrapper[4833]: I0217 14:10:12.239906 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb_5dd72ffb-4456-49bc-9dae-ba715f5a12fa/util/0.log" Feb 17 14:10:12 crc kubenswrapper[4833]: I0217 14:10:12.325126 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kqzvb_5dd72ffb-4456-49bc-9dae-ba715f5a12fa/extract/0.log" Feb 17 14:10:12 crc kubenswrapper[4833]: I0217 14:10:12.425627 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rh8sb_74000cfa-d982-443b-ad50-a470fafc3b5a/extract-utilities/0.log" Feb 17 14:10:12 crc kubenswrapper[4833]: I0217 14:10:12.677488 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rh8sb_74000cfa-d982-443b-ad50-a470fafc3b5a/extract-content/0.log" Feb 17 14:10:12 crc kubenswrapper[4833]: I0217 14:10:12.689421 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-rh8sb_74000cfa-d982-443b-ad50-a470fafc3b5a/extract-content/0.log" Feb 17 14:10:12 crc kubenswrapper[4833]: I0217 14:10:12.714015 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rh8sb_74000cfa-d982-443b-ad50-a470fafc3b5a/extract-utilities/0.log" Feb 17 14:10:12 crc kubenswrapper[4833]: I0217 14:10:12.884668 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rh8sb_74000cfa-d982-443b-ad50-a470fafc3b5a/extract-content/0.log" Feb 17 14:10:12 crc kubenswrapper[4833]: I0217 14:10:12.885683 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rh8sb_74000cfa-d982-443b-ad50-a470fafc3b5a/extract-utilities/0.log" Feb 17 14:10:13 crc kubenswrapper[4833]: I0217 14:10:13.186355 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lvqpz_66be447f-7f0e-411d-870d-cf62f09511a3/extract-utilities/0.log" Feb 17 14:10:13 crc kubenswrapper[4833]: I0217 14:10:13.322949 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rh8sb_74000cfa-d982-443b-ad50-a470fafc3b5a/registry-server/0.log" Feb 17 14:10:13 crc kubenswrapper[4833]: I0217 14:10:13.592440 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lvqpz_66be447f-7f0e-411d-870d-cf62f09511a3/extract-utilities/0.log" Feb 17 14:10:13 crc kubenswrapper[4833]: I0217 14:10:13.617578 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lvqpz_66be447f-7f0e-411d-870d-cf62f09511a3/extract-content/0.log" Feb 17 14:10:13 crc kubenswrapper[4833]: I0217 14:10:13.645119 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-lvqpz_66be447f-7f0e-411d-870d-cf62f09511a3/extract-content/0.log" Feb 17 14:10:13 crc kubenswrapper[4833]: I0217 14:10:13.852877 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lvqpz_66be447f-7f0e-411d-870d-cf62f09511a3/extract-content/0.log" Feb 17 14:10:13 crc kubenswrapper[4833]: I0217 14:10:13.854052 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lvqpz_66be447f-7f0e-411d-870d-cf62f09511a3/extract-utilities/0.log" Feb 17 14:10:14 crc kubenswrapper[4833]: I0217 14:10:14.119090 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq_19698a8a-e73a-4586-97f5-27e2e0bf0dff/util/0.log" Feb 17 14:10:14 crc kubenswrapper[4833]: I0217 14:10:14.221589 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lvqpz_66be447f-7f0e-411d-870d-cf62f09511a3/registry-server/0.log" Feb 17 14:10:14 crc kubenswrapper[4833]: I0217 14:10:14.309364 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq_19698a8a-e73a-4586-97f5-27e2e0bf0dff/util/0.log" Feb 17 14:10:14 crc kubenswrapper[4833]: I0217 14:10:14.309757 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq_19698a8a-e73a-4586-97f5-27e2e0bf0dff/pull/0.log" Feb 17 14:10:14 crc kubenswrapper[4833]: I0217 14:10:14.453575 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq_19698a8a-e73a-4586-97f5-27e2e0bf0dff/pull/0.log" Feb 17 14:10:14 crc kubenswrapper[4833]: I0217 14:10:14.632415 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq_19698a8a-e73a-4586-97f5-27e2e0bf0dff/extract/0.log" Feb 17 14:10:14 crc kubenswrapper[4833]: I0217 14:10:14.653719 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq_19698a8a-e73a-4586-97f5-27e2e0bf0dff/pull/0.log" Feb 17 14:10:14 crc kubenswrapper[4833]: I0217 14:10:14.780276 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4fqhq_19698a8a-e73a-4586-97f5-27e2e0bf0dff/util/0.log" Feb 17 14:10:14 crc kubenswrapper[4833]: I0217 14:10:14.850119 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-d8g96_3a8790e9-2fd7-49cb-a382-d3671a8019a7/marketplace-operator/0.log" Feb 17 14:10:15 crc kubenswrapper[4833]: I0217 14:10:15.017354 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m94q7_75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce/extract-utilities/0.log" Feb 17 14:10:15 crc kubenswrapper[4833]: I0217 14:10:15.164665 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m94q7_75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce/extract-utilities/0.log" Feb 17 14:10:15 crc kubenswrapper[4833]: I0217 14:10:15.224461 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m94q7_75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce/extract-content/0.log" Feb 17 14:10:15 crc kubenswrapper[4833]: I0217 14:10:15.228484 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m94q7_75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce/extract-content/0.log" Feb 17 14:10:15 crc kubenswrapper[4833]: I0217 14:10:15.568530 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-m94q7_75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce/extract-utilities/0.log" Feb 17 14:10:15 crc kubenswrapper[4833]: I0217 14:10:15.654055 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m94q7_75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce/registry-server/0.log" Feb 17 14:10:15 crc kubenswrapper[4833]: I0217 14:10:15.713702 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m94q7_75ec30f8-b6b9-4140-a1ac-f4a421ebe3ce/extract-content/0.log" Feb 17 14:10:15 crc kubenswrapper[4833]: I0217 14:10:15.715529 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbcwj_2b38215d-3854-4f9b-8310-974daedbd313/extract-utilities/0.log" Feb 17 14:10:15 crc kubenswrapper[4833]: I0217 14:10:15.938675 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbcwj_2b38215d-3854-4f9b-8310-974daedbd313/extract-utilities/0.log" Feb 17 14:10:15 crc kubenswrapper[4833]: I0217 14:10:15.957724 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbcwj_2b38215d-3854-4f9b-8310-974daedbd313/extract-content/0.log" Feb 17 14:10:15 crc kubenswrapper[4833]: I0217 14:10:15.962867 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbcwj_2b38215d-3854-4f9b-8310-974daedbd313/extract-content/0.log" Feb 17 14:10:16 crc kubenswrapper[4833]: I0217 14:10:16.174120 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbcwj_2b38215d-3854-4f9b-8310-974daedbd313/extract-content/0.log" Feb 17 14:10:16 crc kubenswrapper[4833]: I0217 14:10:16.196300 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbcwj_2b38215d-3854-4f9b-8310-974daedbd313/extract-utilities/0.log" Feb 
17 14:10:16 crc kubenswrapper[4833]: I0217 14:10:16.471238 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbcwj_2b38215d-3854-4f9b-8310-974daedbd313/registry-server/0.log" Feb 17 14:10:27 crc kubenswrapper[4833]: I0217 14:10:27.042439 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:10:27 crc kubenswrapper[4833]: E0217 14:10:27.043202 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:10:31 crc kubenswrapper[4833]: I0217 14:10:31.138363 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c64c46959-7xhsf_05eabfc9-be7f-4663-8c09-7662798751cb/prometheus-operator-admission-webhook/0.log" Feb 17 14:10:31 crc kubenswrapper[4833]: I0217 14:10:31.161312 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c64c46959-jrtt8_9dedda0d-335a-43f4-b199-e9da78d5b37b/prometheus-operator-admission-webhook/0.log" Feb 17 14:10:31 crc kubenswrapper[4833]: I0217 14:10:31.245751 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-g9798_305d6a8c-4999-4fbb-a005-e115897f16c8/prometheus-operator/0.log" Feb 17 14:10:31 crc kubenswrapper[4833]: I0217 14:10:31.409418 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-hwr99_153717ff-fc08-498f-a4ee-bf06ef5866ab/operator/0.log" Feb 17 14:10:31 crc 
kubenswrapper[4833]: I0217 14:10:31.430628 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-ckvt2_fd47124f-cfc1-4a57-820c-b54cd2e4d113/perses-operator/0.log" Feb 17 14:10:38 crc kubenswrapper[4833]: I0217 14:10:38.041415 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:10:38 crc kubenswrapper[4833]: E0217 14:10:38.041979 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:10:53 crc kubenswrapper[4833]: I0217 14:10:53.041704 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:10:53 crc kubenswrapper[4833]: E0217 14:10:53.043064 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:11:08 crc kubenswrapper[4833]: I0217 14:11:08.041657 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:11:08 crc kubenswrapper[4833]: E0217 14:11:08.042422 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:11:08 crc kubenswrapper[4833]: I0217 14:11:08.665826 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wlp92"] Feb 17 14:11:08 crc kubenswrapper[4833]: E0217 14:11:08.666148 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d23b440e-abdf-4558-92cc-204bdab1dadb" containerName="extract-utilities" Feb 17 14:11:08 crc kubenswrapper[4833]: I0217 14:11:08.666162 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d23b440e-abdf-4558-92cc-204bdab1dadb" containerName="extract-utilities" Feb 17 14:11:08 crc kubenswrapper[4833]: E0217 14:11:08.666172 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d23b440e-abdf-4558-92cc-204bdab1dadb" containerName="registry-server" Feb 17 14:11:08 crc kubenswrapper[4833]: I0217 14:11:08.666178 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d23b440e-abdf-4558-92cc-204bdab1dadb" containerName="registry-server" Feb 17 14:11:08 crc kubenswrapper[4833]: E0217 14:11:08.666193 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d23b440e-abdf-4558-92cc-204bdab1dadb" containerName="extract-content" Feb 17 14:11:08 crc kubenswrapper[4833]: I0217 14:11:08.666198 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d23b440e-abdf-4558-92cc-204bdab1dadb" containerName="extract-content" Feb 17 14:11:08 crc kubenswrapper[4833]: I0217 14:11:08.666344 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d23b440e-abdf-4558-92cc-204bdab1dadb" containerName="registry-server" Feb 17 14:11:08 crc kubenswrapper[4833]: I0217 14:11:08.667298 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlp92" Feb 17 14:11:08 crc kubenswrapper[4833]: I0217 14:11:08.678469 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlp92"] Feb 17 14:11:08 crc kubenswrapper[4833]: I0217 14:11:08.718884 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5bb307b-97df-4a9c-8ce7-f9957d70a7b4-catalog-content\") pod \"redhat-marketplace-wlp92\" (UID: \"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4\") " pod="openshift-marketplace/redhat-marketplace-wlp92" Feb 17 14:11:08 crc kubenswrapper[4833]: I0217 14:11:08.718946 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5bb307b-97df-4a9c-8ce7-f9957d70a7b4-utilities\") pod \"redhat-marketplace-wlp92\" (UID: \"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4\") " pod="openshift-marketplace/redhat-marketplace-wlp92" Feb 17 14:11:08 crc kubenswrapper[4833]: I0217 14:11:08.719188 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5s77\" (UniqueName: \"kubernetes.io/projected/b5bb307b-97df-4a9c-8ce7-f9957d70a7b4-kube-api-access-f5s77\") pod \"redhat-marketplace-wlp92\" (UID: \"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4\") " pod="openshift-marketplace/redhat-marketplace-wlp92" Feb 17 14:11:08 crc kubenswrapper[4833]: I0217 14:11:08.820231 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5bb307b-97df-4a9c-8ce7-f9957d70a7b4-catalog-content\") pod \"redhat-marketplace-wlp92\" (UID: \"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4\") " pod="openshift-marketplace/redhat-marketplace-wlp92" Feb 17 14:11:08 crc kubenswrapper[4833]: I0217 14:11:08.820277 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5bb307b-97df-4a9c-8ce7-f9957d70a7b4-utilities\") pod \"redhat-marketplace-wlp92\" (UID: \"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4\") " pod="openshift-marketplace/redhat-marketplace-wlp92" Feb 17 14:11:08 crc kubenswrapper[4833]: I0217 14:11:08.820348 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5s77\" (UniqueName: \"kubernetes.io/projected/b5bb307b-97df-4a9c-8ce7-f9957d70a7b4-kube-api-access-f5s77\") pod \"redhat-marketplace-wlp92\" (UID: \"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4\") " pod="openshift-marketplace/redhat-marketplace-wlp92" Feb 17 14:11:08 crc kubenswrapper[4833]: I0217 14:11:08.820765 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5bb307b-97df-4a9c-8ce7-f9957d70a7b4-catalog-content\") pod \"redhat-marketplace-wlp92\" (UID: \"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4\") " pod="openshift-marketplace/redhat-marketplace-wlp92" Feb 17 14:11:08 crc kubenswrapper[4833]: I0217 14:11:08.820829 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5bb307b-97df-4a9c-8ce7-f9957d70a7b4-utilities\") pod \"redhat-marketplace-wlp92\" (UID: \"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4\") " pod="openshift-marketplace/redhat-marketplace-wlp92" Feb 17 14:11:08 crc kubenswrapper[4833]: I0217 14:11:08.845442 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5s77\" (UniqueName: \"kubernetes.io/projected/b5bb307b-97df-4a9c-8ce7-f9957d70a7b4-kube-api-access-f5s77\") pod \"redhat-marketplace-wlp92\" (UID: \"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4\") " pod="openshift-marketplace/redhat-marketplace-wlp92" Feb 17 14:11:09 crc kubenswrapper[4833]: I0217 14:11:09.030335 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlp92" Feb 17 14:11:09 crc kubenswrapper[4833]: I0217 14:11:09.293257 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlp92"] Feb 17 14:11:09 crc kubenswrapper[4833]: I0217 14:11:09.942353 4833 generic.go:334] "Generic (PLEG): container finished" podID="b5bb307b-97df-4a9c-8ce7-f9957d70a7b4" containerID="f3244d3f1ea18a449339d2fe205f97e61a5e4df844f205e6b5cbe1755a4c919a" exitCode=0 Feb 17 14:11:09 crc kubenswrapper[4833]: I0217 14:11:09.942636 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlp92" event={"ID":"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4","Type":"ContainerDied","Data":"f3244d3f1ea18a449339d2fe205f97e61a5e4df844f205e6b5cbe1755a4c919a"} Feb 17 14:11:09 crc kubenswrapper[4833]: I0217 14:11:09.942661 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlp92" event={"ID":"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4","Type":"ContainerStarted","Data":"7279b40dc27172935e549206e9feef3635d54cdd1f5be9877d03afd6437e589e"} Feb 17 14:11:14 crc kubenswrapper[4833]: I0217 14:11:14.976329 4833 generic.go:334] "Generic (PLEG): container finished" podID="b5bb307b-97df-4a9c-8ce7-f9957d70a7b4" containerID="7087e2b262ce9c837b4e1e5cdffcb1de5d5228513fb72addb0aa4ed8afad761c" exitCode=0 Feb 17 14:11:14 crc kubenswrapper[4833]: I0217 14:11:14.976441 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlp92" event={"ID":"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4","Type":"ContainerDied","Data":"7087e2b262ce9c837b4e1e5cdffcb1de5d5228513fb72addb0aa4ed8afad761c"} Feb 17 14:11:15 crc kubenswrapper[4833]: I0217 14:11:15.986025 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlp92" 
event={"ID":"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4","Type":"ContainerStarted","Data":"51d7960db31e616407df18eb8526974ca753d03d9b0727c58e746cc14e288006"} Feb 17 14:11:16 crc kubenswrapper[4833]: I0217 14:11:16.011736 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wlp92" podStartSLOduration=2.165531922 podStartE2EDuration="8.011715254s" podCreationTimestamp="2026-02-17 14:11:08 +0000 UTC" firstStartedPulling="2026-02-17 14:11:09.943771315 +0000 UTC m=+1559.578870748" lastFinishedPulling="2026-02-17 14:11:15.789954647 +0000 UTC m=+1565.425054080" observedRunningTime="2026-02-17 14:11:16.006603828 +0000 UTC m=+1565.641703271" watchObservedRunningTime="2026-02-17 14:11:16.011715254 +0000 UTC m=+1565.646814687" Feb 17 14:11:19 crc kubenswrapper[4833]: I0217 14:11:19.031401 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wlp92" Feb 17 14:11:19 crc kubenswrapper[4833]: I0217 14:11:19.031461 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wlp92" Feb 17 14:11:19 crc kubenswrapper[4833]: I0217 14:11:19.071552 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wlp92" Feb 17 14:11:20 crc kubenswrapper[4833]: I0217 14:11:20.062732 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wlp92" Feb 17 14:11:21 crc kubenswrapper[4833]: I0217 14:11:21.047590 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:11:21 crc kubenswrapper[4833]: E0217 14:11:21.048090 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:11:21 crc kubenswrapper[4833]: I0217 14:11:21.449915 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlp92"] Feb 17 14:11:22 crc kubenswrapper[4833]: I0217 14:11:22.022512 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wlp92" podUID="b5bb307b-97df-4a9c-8ce7-f9957d70a7b4" containerName="registry-server" containerID="cri-o://51d7960db31e616407df18eb8526974ca753d03d9b0727c58e746cc14e288006" gracePeriod=2 Feb 17 14:11:22 crc kubenswrapper[4833]: I0217 14:11:22.476497 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlp92" Feb 17 14:11:22 crc kubenswrapper[4833]: I0217 14:11:22.611308 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5bb307b-97df-4a9c-8ce7-f9957d70a7b4-utilities\") pod \"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4\" (UID: \"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4\") " Feb 17 14:11:22 crc kubenswrapper[4833]: I0217 14:11:22.612181 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5bb307b-97df-4a9c-8ce7-f9957d70a7b4-utilities" (OuterVolumeSpecName: "utilities") pod "b5bb307b-97df-4a9c-8ce7-f9957d70a7b4" (UID: "b5bb307b-97df-4a9c-8ce7-f9957d70a7b4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:11:22 crc kubenswrapper[4833]: I0217 14:11:22.612316 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5s77\" (UniqueName: \"kubernetes.io/projected/b5bb307b-97df-4a9c-8ce7-f9957d70a7b4-kube-api-access-f5s77\") pod \"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4\" (UID: \"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4\") " Feb 17 14:11:22 crc kubenswrapper[4833]: I0217 14:11:22.613720 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5bb307b-97df-4a9c-8ce7-f9957d70a7b4-catalog-content\") pod \"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4\" (UID: \"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4\") " Feb 17 14:11:22 crc kubenswrapper[4833]: I0217 14:11:22.614004 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5bb307b-97df-4a9c-8ce7-f9957d70a7b4-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:22 crc kubenswrapper[4833]: I0217 14:11:22.633599 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5bb307b-97df-4a9c-8ce7-f9957d70a7b4-kube-api-access-f5s77" (OuterVolumeSpecName: "kube-api-access-f5s77") pod "b5bb307b-97df-4a9c-8ce7-f9957d70a7b4" (UID: "b5bb307b-97df-4a9c-8ce7-f9957d70a7b4"). InnerVolumeSpecName "kube-api-access-f5s77". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:11:22 crc kubenswrapper[4833]: I0217 14:11:22.638500 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5bb307b-97df-4a9c-8ce7-f9957d70a7b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5bb307b-97df-4a9c-8ce7-f9957d70a7b4" (UID: "b5bb307b-97df-4a9c-8ce7-f9957d70a7b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:11:22 crc kubenswrapper[4833]: I0217 14:11:22.714655 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5s77\" (UniqueName: \"kubernetes.io/projected/b5bb307b-97df-4a9c-8ce7-f9957d70a7b4-kube-api-access-f5s77\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:22 crc kubenswrapper[4833]: I0217 14:11:22.714690 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5bb307b-97df-4a9c-8ce7-f9957d70a7b4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:23 crc kubenswrapper[4833]: I0217 14:11:23.032996 4833 generic.go:334] "Generic (PLEG): container finished" podID="b5bb307b-97df-4a9c-8ce7-f9957d70a7b4" containerID="51d7960db31e616407df18eb8526974ca753d03d9b0727c58e746cc14e288006" exitCode=0 Feb 17 14:11:23 crc kubenswrapper[4833]: I0217 14:11:23.033061 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlp92" event={"ID":"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4","Type":"ContainerDied","Data":"51d7960db31e616407df18eb8526974ca753d03d9b0727c58e746cc14e288006"} Feb 17 14:11:23 crc kubenswrapper[4833]: I0217 14:11:23.033087 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlp92" event={"ID":"b5bb307b-97df-4a9c-8ce7-f9957d70a7b4","Type":"ContainerDied","Data":"7279b40dc27172935e549206e9feef3635d54cdd1f5be9877d03afd6437e589e"} Feb 17 14:11:23 crc kubenswrapper[4833]: I0217 14:11:23.033104 4833 scope.go:117] "RemoveContainer" containerID="51d7960db31e616407df18eb8526974ca753d03d9b0727c58e746cc14e288006" Feb 17 14:11:23 crc kubenswrapper[4833]: I0217 14:11:23.033126 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlp92" Feb 17 14:11:23 crc kubenswrapper[4833]: I0217 14:11:23.063515 4833 scope.go:117] "RemoveContainer" containerID="7087e2b262ce9c837b4e1e5cdffcb1de5d5228513fb72addb0aa4ed8afad761c" Feb 17 14:11:23 crc kubenswrapper[4833]: I0217 14:11:23.064869 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlp92"] Feb 17 14:11:23 crc kubenswrapper[4833]: I0217 14:11:23.072236 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlp92"] Feb 17 14:11:23 crc kubenswrapper[4833]: I0217 14:11:23.089946 4833 scope.go:117] "RemoveContainer" containerID="f3244d3f1ea18a449339d2fe205f97e61a5e4df844f205e6b5cbe1755a4c919a" Feb 17 14:11:23 crc kubenswrapper[4833]: I0217 14:11:23.112785 4833 scope.go:117] "RemoveContainer" containerID="51d7960db31e616407df18eb8526974ca753d03d9b0727c58e746cc14e288006" Feb 17 14:11:23 crc kubenswrapper[4833]: E0217 14:11:23.113433 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51d7960db31e616407df18eb8526974ca753d03d9b0727c58e746cc14e288006\": container with ID starting with 51d7960db31e616407df18eb8526974ca753d03d9b0727c58e746cc14e288006 not found: ID does not exist" containerID="51d7960db31e616407df18eb8526974ca753d03d9b0727c58e746cc14e288006" Feb 17 14:11:23 crc kubenswrapper[4833]: I0217 14:11:23.113470 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51d7960db31e616407df18eb8526974ca753d03d9b0727c58e746cc14e288006"} err="failed to get container status \"51d7960db31e616407df18eb8526974ca753d03d9b0727c58e746cc14e288006\": rpc error: code = NotFound desc = could not find container \"51d7960db31e616407df18eb8526974ca753d03d9b0727c58e746cc14e288006\": container with ID starting with 51d7960db31e616407df18eb8526974ca753d03d9b0727c58e746cc14e288006 not found: 
ID does not exist" Feb 17 14:11:23 crc kubenswrapper[4833]: I0217 14:11:23.113490 4833 scope.go:117] "RemoveContainer" containerID="7087e2b262ce9c837b4e1e5cdffcb1de5d5228513fb72addb0aa4ed8afad761c" Feb 17 14:11:23 crc kubenswrapper[4833]: E0217 14:11:23.113979 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7087e2b262ce9c837b4e1e5cdffcb1de5d5228513fb72addb0aa4ed8afad761c\": container with ID starting with 7087e2b262ce9c837b4e1e5cdffcb1de5d5228513fb72addb0aa4ed8afad761c not found: ID does not exist" containerID="7087e2b262ce9c837b4e1e5cdffcb1de5d5228513fb72addb0aa4ed8afad761c" Feb 17 14:11:23 crc kubenswrapper[4833]: I0217 14:11:23.114210 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7087e2b262ce9c837b4e1e5cdffcb1de5d5228513fb72addb0aa4ed8afad761c"} err="failed to get container status \"7087e2b262ce9c837b4e1e5cdffcb1de5d5228513fb72addb0aa4ed8afad761c\": rpc error: code = NotFound desc = could not find container \"7087e2b262ce9c837b4e1e5cdffcb1de5d5228513fb72addb0aa4ed8afad761c\": container with ID starting with 7087e2b262ce9c837b4e1e5cdffcb1de5d5228513fb72addb0aa4ed8afad761c not found: ID does not exist" Feb 17 14:11:23 crc kubenswrapper[4833]: I0217 14:11:23.114359 4833 scope.go:117] "RemoveContainer" containerID="f3244d3f1ea18a449339d2fe205f97e61a5e4df844f205e6b5cbe1755a4c919a" Feb 17 14:11:23 crc kubenswrapper[4833]: E0217 14:11:23.114930 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3244d3f1ea18a449339d2fe205f97e61a5e4df844f205e6b5cbe1755a4c919a\": container with ID starting with f3244d3f1ea18a449339d2fe205f97e61a5e4df844f205e6b5cbe1755a4c919a not found: ID does not exist" containerID="f3244d3f1ea18a449339d2fe205f97e61a5e4df844f205e6b5cbe1755a4c919a" Feb 17 14:11:23 crc kubenswrapper[4833]: I0217 14:11:23.115079 4833 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3244d3f1ea18a449339d2fe205f97e61a5e4df844f205e6b5cbe1755a4c919a"} err="failed to get container status \"f3244d3f1ea18a449339d2fe205f97e61a5e4df844f205e6b5cbe1755a4c919a\": rpc error: code = NotFound desc = could not find container \"f3244d3f1ea18a449339d2fe205f97e61a5e4df844f205e6b5cbe1755a4c919a\": container with ID starting with f3244d3f1ea18a449339d2fe205f97e61a5e4df844f205e6b5cbe1755a4c919a not found: ID does not exist" Feb 17 14:11:25 crc kubenswrapper[4833]: I0217 14:11:25.051545 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5bb307b-97df-4a9c-8ce7-f9957d70a7b4" path="/var/lib/kubelet/pods/b5bb307b-97df-4a9c-8ce7-f9957d70a7b4/volumes" Feb 17 14:11:35 crc kubenswrapper[4833]: I0217 14:11:35.120973 4833 generic.go:334] "Generic (PLEG): container finished" podID="f461fd42-3f7e-4bfb-8cd3-34a3524e27ad" containerID="038e30feb973382cc3a0fad208909871faf017da9662cbcb8ef478e8958b56a1" exitCode=0 Feb 17 14:11:35 crc kubenswrapper[4833]: I0217 14:11:35.121071 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mtcdf/must-gather-4nb2z" event={"ID":"f461fd42-3f7e-4bfb-8cd3-34a3524e27ad","Type":"ContainerDied","Data":"038e30feb973382cc3a0fad208909871faf017da9662cbcb8ef478e8958b56a1"} Feb 17 14:11:35 crc kubenswrapper[4833]: I0217 14:11:35.121941 4833 scope.go:117] "RemoveContainer" containerID="038e30feb973382cc3a0fad208909871faf017da9662cbcb8ef478e8958b56a1" Feb 17 14:11:35 crc kubenswrapper[4833]: I0217 14:11:35.343533 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mtcdf_must-gather-4nb2z_f461fd42-3f7e-4bfb-8cd3-34a3524e27ad/gather/0.log" Feb 17 14:11:36 crc kubenswrapper[4833]: I0217 14:11:36.041819 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:11:36 crc kubenswrapper[4833]: E0217 14:11:36.042315 4833 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:11:42 crc kubenswrapper[4833]: I0217 14:11:42.127132 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mtcdf/must-gather-4nb2z"] Feb 17 14:11:42 crc kubenswrapper[4833]: I0217 14:11:42.127958 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-mtcdf/must-gather-4nb2z" podUID="f461fd42-3f7e-4bfb-8cd3-34a3524e27ad" containerName="copy" containerID="cri-o://66abcf5abb12714ffe2614fa4541b2247efb9fc2497b573485bd8fcda672ed53" gracePeriod=2 Feb 17 14:11:42 crc kubenswrapper[4833]: I0217 14:11:42.135476 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mtcdf/must-gather-4nb2z"] Feb 17 14:11:42 crc kubenswrapper[4833]: I0217 14:11:42.532354 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mtcdf_must-gather-4nb2z_f461fd42-3f7e-4bfb-8cd3-34a3524e27ad/copy/0.log" Feb 17 14:11:42 crc kubenswrapper[4833]: I0217 14:11:42.533514 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mtcdf/must-gather-4nb2z" Feb 17 14:11:42 crc kubenswrapper[4833]: I0217 14:11:42.716745 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbt6q\" (UniqueName: \"kubernetes.io/projected/f461fd42-3f7e-4bfb-8cd3-34a3524e27ad-kube-api-access-jbt6q\") pod \"f461fd42-3f7e-4bfb-8cd3-34a3524e27ad\" (UID: \"f461fd42-3f7e-4bfb-8cd3-34a3524e27ad\") " Feb 17 14:11:42 crc kubenswrapper[4833]: I0217 14:11:42.716860 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f461fd42-3f7e-4bfb-8cd3-34a3524e27ad-must-gather-output\") pod \"f461fd42-3f7e-4bfb-8cd3-34a3524e27ad\" (UID: \"f461fd42-3f7e-4bfb-8cd3-34a3524e27ad\") " Feb 17 14:11:42 crc kubenswrapper[4833]: I0217 14:11:42.722557 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f461fd42-3f7e-4bfb-8cd3-34a3524e27ad-kube-api-access-jbt6q" (OuterVolumeSpecName: "kube-api-access-jbt6q") pod "f461fd42-3f7e-4bfb-8cd3-34a3524e27ad" (UID: "f461fd42-3f7e-4bfb-8cd3-34a3524e27ad"). InnerVolumeSpecName "kube-api-access-jbt6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:11:42 crc kubenswrapper[4833]: I0217 14:11:42.803014 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f461fd42-3f7e-4bfb-8cd3-34a3524e27ad-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f461fd42-3f7e-4bfb-8cd3-34a3524e27ad" (UID: "f461fd42-3f7e-4bfb-8cd3-34a3524e27ad"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:11:42 crc kubenswrapper[4833]: I0217 14:11:42.818454 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbt6q\" (UniqueName: \"kubernetes.io/projected/f461fd42-3f7e-4bfb-8cd3-34a3524e27ad-kube-api-access-jbt6q\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:42 crc kubenswrapper[4833]: I0217 14:11:42.818724 4833 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f461fd42-3f7e-4bfb-8cd3-34a3524e27ad-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 17 14:11:43 crc kubenswrapper[4833]: I0217 14:11:43.051584 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f461fd42-3f7e-4bfb-8cd3-34a3524e27ad" path="/var/lib/kubelet/pods/f461fd42-3f7e-4bfb-8cd3-34a3524e27ad/volumes" Feb 17 14:11:43 crc kubenswrapper[4833]: I0217 14:11:43.180697 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mtcdf_must-gather-4nb2z_f461fd42-3f7e-4bfb-8cd3-34a3524e27ad/copy/0.log" Feb 17 14:11:43 crc kubenswrapper[4833]: I0217 14:11:43.181559 4833 generic.go:334] "Generic (PLEG): container finished" podID="f461fd42-3f7e-4bfb-8cd3-34a3524e27ad" containerID="66abcf5abb12714ffe2614fa4541b2247efb9fc2497b573485bd8fcda672ed53" exitCode=143 Feb 17 14:11:43 crc kubenswrapper[4833]: I0217 14:11:43.181618 4833 scope.go:117] "RemoveContainer" containerID="66abcf5abb12714ffe2614fa4541b2247efb9fc2497b573485bd8fcda672ed53" Feb 17 14:11:43 crc kubenswrapper[4833]: I0217 14:11:43.181643 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mtcdf/must-gather-4nb2z" Feb 17 14:11:43 crc kubenswrapper[4833]: I0217 14:11:43.200309 4833 scope.go:117] "RemoveContainer" containerID="038e30feb973382cc3a0fad208909871faf017da9662cbcb8ef478e8958b56a1" Feb 17 14:11:43 crc kubenswrapper[4833]: I0217 14:11:43.283427 4833 scope.go:117] "RemoveContainer" containerID="66abcf5abb12714ffe2614fa4541b2247efb9fc2497b573485bd8fcda672ed53" Feb 17 14:11:43 crc kubenswrapper[4833]: E0217 14:11:43.284923 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66abcf5abb12714ffe2614fa4541b2247efb9fc2497b573485bd8fcda672ed53\": container with ID starting with 66abcf5abb12714ffe2614fa4541b2247efb9fc2497b573485bd8fcda672ed53 not found: ID does not exist" containerID="66abcf5abb12714ffe2614fa4541b2247efb9fc2497b573485bd8fcda672ed53" Feb 17 14:11:43 crc kubenswrapper[4833]: I0217 14:11:43.284979 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66abcf5abb12714ffe2614fa4541b2247efb9fc2497b573485bd8fcda672ed53"} err="failed to get container status \"66abcf5abb12714ffe2614fa4541b2247efb9fc2497b573485bd8fcda672ed53\": rpc error: code = NotFound desc = could not find container \"66abcf5abb12714ffe2614fa4541b2247efb9fc2497b573485bd8fcda672ed53\": container with ID starting with 66abcf5abb12714ffe2614fa4541b2247efb9fc2497b573485bd8fcda672ed53 not found: ID does not exist" Feb 17 14:11:43 crc kubenswrapper[4833]: I0217 14:11:43.285011 4833 scope.go:117] "RemoveContainer" containerID="038e30feb973382cc3a0fad208909871faf017da9662cbcb8ef478e8958b56a1" Feb 17 14:11:43 crc kubenswrapper[4833]: E0217 14:11:43.287341 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"038e30feb973382cc3a0fad208909871faf017da9662cbcb8ef478e8958b56a1\": container with ID starting with 
038e30feb973382cc3a0fad208909871faf017da9662cbcb8ef478e8958b56a1 not found: ID does not exist" containerID="038e30feb973382cc3a0fad208909871faf017da9662cbcb8ef478e8958b56a1" Feb 17 14:11:43 crc kubenswrapper[4833]: I0217 14:11:43.287393 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"038e30feb973382cc3a0fad208909871faf017da9662cbcb8ef478e8958b56a1"} err="failed to get container status \"038e30feb973382cc3a0fad208909871faf017da9662cbcb8ef478e8958b56a1\": rpc error: code = NotFound desc = could not find container \"038e30feb973382cc3a0fad208909871faf017da9662cbcb8ef478e8958b56a1\": container with ID starting with 038e30feb973382cc3a0fad208909871faf017da9662cbcb8ef478e8958b56a1 not found: ID does not exist" Feb 17 14:11:47 crc kubenswrapper[4833]: I0217 14:11:47.041907 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:11:47 crc kubenswrapper[4833]: E0217 14:11:47.042698 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:12:00 crc kubenswrapper[4833]: I0217 14:12:00.041207 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:12:00 crc kubenswrapper[4833]: E0217 14:12:00.041943 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:12:12 crc kubenswrapper[4833]: I0217 14:12:12.041577 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:12:12 crc kubenswrapper[4833]: E0217 14:12:12.042394 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:12:23 crc kubenswrapper[4833]: I0217 14:12:23.041242 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:12:23 crc kubenswrapper[4833]: E0217 14:12:23.042217 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:12:34 crc kubenswrapper[4833]: I0217 14:12:34.041797 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:12:34 crc kubenswrapper[4833]: E0217 14:12:34.042550 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:12:49 crc kubenswrapper[4833]: I0217 14:12:49.041338 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:12:49 crc kubenswrapper[4833]: E0217 14:12:49.042020 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:13:00 crc kubenswrapper[4833]: I0217 14:13:00.041840 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:13:00 crc kubenswrapper[4833]: E0217 14:13:00.042475 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:13:13 crc kubenswrapper[4833]: I0217 14:13:13.041200 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:13:13 crc kubenswrapper[4833]: E0217 14:13:13.041988 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:13:24 crc kubenswrapper[4833]: I0217 14:13:24.042295 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:13:24 crc kubenswrapper[4833]: E0217 14:13:24.043157 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:13:37 crc kubenswrapper[4833]: I0217 14:13:37.041243 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:13:37 crc kubenswrapper[4833]: E0217 14:13:37.041833 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:13:48 crc kubenswrapper[4833]: I0217 14:13:48.042009 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:13:48 crc kubenswrapper[4833]: E0217 14:13:48.042816 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:14:03 crc kubenswrapper[4833]: I0217 14:14:03.041565 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:14:03 crc kubenswrapper[4833]: E0217 14:14:03.042149 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:14:16 crc kubenswrapper[4833]: I0217 14:14:16.041322 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:14:16 crc kubenswrapper[4833]: E0217 14:14:16.042124 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:14:30 crc kubenswrapper[4833]: I0217 14:14:30.041326 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:14:30 crc kubenswrapper[4833]: E0217 14:14:30.042018 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:14:41 crc kubenswrapper[4833]: I0217 14:14:41.045208 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:14:41 crc kubenswrapper[4833]: E0217 14:14:41.046098 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nmzvl_openshift-machine-config-operator(f4a1ca83-1919-4f9c-82de-c849cbd50e70)\"" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" podUID="f4a1ca83-1919-4f9c-82de-c849cbd50e70" Feb 17 14:14:55 crc kubenswrapper[4833]: I0217 14:14:55.041112 4833 scope.go:117] "RemoveContainer" containerID="e53af8c62aa13ae97cf8832a68e96b811fc441a187026ab07c19f017c0e40d9d" Feb 17 14:14:55 crc kubenswrapper[4833]: I0217 14:14:55.972326 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nmzvl" event={"ID":"f4a1ca83-1919-4f9c-82de-c849cbd50e70","Type":"ContainerStarted","Data":"aac74602ad4d47e306ce9de39bd260794d5b7b445bc73446bc1bc443e12e6647"} Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.154632 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522295-fp4n4"] Feb 17 14:15:00 crc kubenswrapper[4833]: E0217 14:15:00.155447 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5bb307b-97df-4a9c-8ce7-f9957d70a7b4" containerName="extract-content" Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.155459 4833 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b5bb307b-97df-4a9c-8ce7-f9957d70a7b4" containerName="extract-content"
Feb 17 14:15:00 crc kubenswrapper[4833]: E0217 14:15:00.155474 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f461fd42-3f7e-4bfb-8cd3-34a3524e27ad" containerName="gather"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.155480 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f461fd42-3f7e-4bfb-8cd3-34a3524e27ad" containerName="gather"
Feb 17 14:15:00 crc kubenswrapper[4833]: E0217 14:15:00.155499 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f461fd42-3f7e-4bfb-8cd3-34a3524e27ad" containerName="copy"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.155508 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f461fd42-3f7e-4bfb-8cd3-34a3524e27ad" containerName="copy"
Feb 17 14:15:00 crc kubenswrapper[4833]: E0217 14:15:00.155520 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5bb307b-97df-4a9c-8ce7-f9957d70a7b4" containerName="registry-server"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.155526 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5bb307b-97df-4a9c-8ce7-f9957d70a7b4" containerName="registry-server"
Feb 17 14:15:00 crc kubenswrapper[4833]: E0217 14:15:00.155537 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5bb307b-97df-4a9c-8ce7-f9957d70a7b4" containerName="extract-utilities"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.155543 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5bb307b-97df-4a9c-8ce7-f9957d70a7b4" containerName="extract-utilities"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.155665 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f461fd42-3f7e-4bfb-8cd3-34a3524e27ad" containerName="gather"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.155679 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5bb307b-97df-4a9c-8ce7-f9957d70a7b4" containerName="registry-server"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.155688 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f461fd42-3f7e-4bfb-8cd3-34a3524e27ad" containerName="copy"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.156147 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-fp4n4"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.160221 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.160351 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.176973 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522295-fp4n4"]
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.588671 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/35933a24-dca2-44b2-8253-6d5a52e97bce-secret-volume\") pod \"collect-profiles-29522295-fp4n4\" (UID: \"35933a24-dca2-44b2-8253-6d5a52e97bce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-fp4n4"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.588776 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7twc\" (UniqueName: \"kubernetes.io/projected/35933a24-dca2-44b2-8253-6d5a52e97bce-kube-api-access-q7twc\") pod \"collect-profiles-29522295-fp4n4\" (UID: \"35933a24-dca2-44b2-8253-6d5a52e97bce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-fp4n4"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.588837 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35933a24-dca2-44b2-8253-6d5a52e97bce-config-volume\") pod \"collect-profiles-29522295-fp4n4\" (UID: \"35933a24-dca2-44b2-8253-6d5a52e97bce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-fp4n4"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.690095 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/35933a24-dca2-44b2-8253-6d5a52e97bce-secret-volume\") pod \"collect-profiles-29522295-fp4n4\" (UID: \"35933a24-dca2-44b2-8253-6d5a52e97bce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-fp4n4"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.690205 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7twc\" (UniqueName: \"kubernetes.io/projected/35933a24-dca2-44b2-8253-6d5a52e97bce-kube-api-access-q7twc\") pod \"collect-profiles-29522295-fp4n4\" (UID: \"35933a24-dca2-44b2-8253-6d5a52e97bce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-fp4n4"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.690274 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35933a24-dca2-44b2-8253-6d5a52e97bce-config-volume\") pod \"collect-profiles-29522295-fp4n4\" (UID: \"35933a24-dca2-44b2-8253-6d5a52e97bce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-fp4n4"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.691477 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35933a24-dca2-44b2-8253-6d5a52e97bce-config-volume\") pod \"collect-profiles-29522295-fp4n4\" (UID: \"35933a24-dca2-44b2-8253-6d5a52e97bce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-fp4n4"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.699841 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/35933a24-dca2-44b2-8253-6d5a52e97bce-secret-volume\") pod \"collect-profiles-29522295-fp4n4\" (UID: \"35933a24-dca2-44b2-8253-6d5a52e97bce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-fp4n4"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.722745 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7twc\" (UniqueName: \"kubernetes.io/projected/35933a24-dca2-44b2-8253-6d5a52e97bce-kube-api-access-q7twc\") pod \"collect-profiles-29522295-fp4n4\" (UID: \"35933a24-dca2-44b2-8253-6d5a52e97bce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-fp4n4"
Feb 17 14:15:00 crc kubenswrapper[4833]: I0217 14:15:00.800017 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-fp4n4"
Feb 17 14:15:01 crc kubenswrapper[4833]: I0217 14:15:01.196903 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522295-fp4n4"]
Feb 17 14:15:02 crc kubenswrapper[4833]: I0217 14:15:02.037810 4833 generic.go:334] "Generic (PLEG): container finished" podID="35933a24-dca2-44b2-8253-6d5a52e97bce" containerID="e533d6b8b8493fce388b7f524c87a37b65d2a43b47877a263ff1350171d13d52" exitCode=0
Feb 17 14:15:02 crc kubenswrapper[4833]: I0217 14:15:02.037887 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-fp4n4" event={"ID":"35933a24-dca2-44b2-8253-6d5a52e97bce","Type":"ContainerDied","Data":"e533d6b8b8493fce388b7f524c87a37b65d2a43b47877a263ff1350171d13d52"}
Feb 17 14:15:02 crc kubenswrapper[4833]: I0217 14:15:02.038359 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-fp4n4" event={"ID":"35933a24-dca2-44b2-8253-6d5a52e97bce","Type":"ContainerStarted","Data":"858a77c109fc456b76acda1868b24401f08a2efde9c156af189d36a6d2b589ee"}
Feb 17 14:15:03 crc kubenswrapper[4833]: I0217 14:15:03.375016 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-fp4n4"
Feb 17 14:15:03 crc kubenswrapper[4833]: I0217 14:15:03.437852 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35933a24-dca2-44b2-8253-6d5a52e97bce-config-volume\") pod \"35933a24-dca2-44b2-8253-6d5a52e97bce\" (UID: \"35933a24-dca2-44b2-8253-6d5a52e97bce\") "
Feb 17 14:15:03 crc kubenswrapper[4833]: I0217 14:15:03.437927 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7twc\" (UniqueName: \"kubernetes.io/projected/35933a24-dca2-44b2-8253-6d5a52e97bce-kube-api-access-q7twc\") pod \"35933a24-dca2-44b2-8253-6d5a52e97bce\" (UID: \"35933a24-dca2-44b2-8253-6d5a52e97bce\") "
Feb 17 14:15:03 crc kubenswrapper[4833]: I0217 14:15:03.438064 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/35933a24-dca2-44b2-8253-6d5a52e97bce-secret-volume\") pod \"35933a24-dca2-44b2-8253-6d5a52e97bce\" (UID: \"35933a24-dca2-44b2-8253-6d5a52e97bce\") "
Feb 17 14:15:03 crc kubenswrapper[4833]: I0217 14:15:03.438885 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35933a24-dca2-44b2-8253-6d5a52e97bce-config-volume" (OuterVolumeSpecName: "config-volume") pod "35933a24-dca2-44b2-8253-6d5a52e97bce" (UID: "35933a24-dca2-44b2-8253-6d5a52e97bce"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:15:03 crc kubenswrapper[4833]: I0217 14:15:03.443600 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35933a24-dca2-44b2-8253-6d5a52e97bce-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "35933a24-dca2-44b2-8253-6d5a52e97bce" (UID: "35933a24-dca2-44b2-8253-6d5a52e97bce"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:15:03 crc kubenswrapper[4833]: I0217 14:15:03.445129 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35933a24-dca2-44b2-8253-6d5a52e97bce-kube-api-access-q7twc" (OuterVolumeSpecName: "kube-api-access-q7twc") pod "35933a24-dca2-44b2-8253-6d5a52e97bce" (UID: "35933a24-dca2-44b2-8253-6d5a52e97bce"). InnerVolumeSpecName "kube-api-access-q7twc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:15:03 crc kubenswrapper[4833]: I0217 14:15:03.539988 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35933a24-dca2-44b2-8253-6d5a52e97bce-config-volume\") on node \"crc\" DevicePath \"\""
Feb 17 14:15:03 crc kubenswrapper[4833]: I0217 14:15:03.540020 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7twc\" (UniqueName: \"kubernetes.io/projected/35933a24-dca2-44b2-8253-6d5a52e97bce-kube-api-access-q7twc\") on node \"crc\" DevicePath \"\""
Feb 17 14:15:03 crc kubenswrapper[4833]: I0217 14:15:03.540030 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/35933a24-dca2-44b2-8253-6d5a52e97bce-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 17 14:15:04 crc kubenswrapper[4833]: I0217 14:15:04.051824 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-fp4n4" event={"ID":"35933a24-dca2-44b2-8253-6d5a52e97bce","Type":"ContainerDied","Data":"858a77c109fc456b76acda1868b24401f08a2efde9c156af189d36a6d2b589ee"}
Feb 17 14:15:04 crc kubenswrapper[4833]: I0217 14:15:04.052216 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="858a77c109fc456b76acda1868b24401f08a2efde9c156af189d36a6d2b589ee"
Feb 17 14:15:04 crc kubenswrapper[4833]: I0217 14:15:04.051858 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-fp4n4"